Experiment 4: Pose tracking
This uses ml5 and a pre-trained, remotely hosted PoseNet model to detect a human body in an image randomly selected from the 2022 Festival catalog. (Reload the page for a new image.) The model can detect 17 points on the human body, from eyes and ears to hands and feet, and was trained primarily on fully visible bodies. What and who is seen? What can be inferred about individual and collective actions?
This version detects only a single person; future work could track multiple people, then move on to video. Find the code here.
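The single-person detection described above can be sketched roughly as follows, assuming p5.js and ml5.js (v0.x) are loaded in the page via script tags. The image path is hypothetical, standing in for the randomly selected catalog image; the keypoint drawing is illustrative, not the project's actual rendering.

```javascript
// Sketch of single-pose detection on a still image with ml5's PoseNet wrapper.
// Assumes p5.js and ml5.js (v0.x) are already loaded; the image path is hypothetical.
let img;
let poseNet;

function preload() {
  img = loadImage('catalog/festival-2022-photo.jpg'); // hypothetical catalog image
}

function setup() {
  createCanvas(img.width, img.height);
  image(img, 0, 0);
  // Load the pre-trained PoseNet model (fetched remotely on first use).
  poseNet = ml5.poseNet(modelReady);
  poseNet.on('pose', gotPoses);
}

function modelReady() {
  // Ask for a single pose in the still image (rather than continuous video).
  poseNet.singlePose(img);
}

function gotPoses(results) {
  if (results.length === 0) return;
  // Each of the model's 17 keypoints has a position and a confidence score.
  for (const kp of results[0].pose.keypoints) {
    if (kp.score > 0.2) {
      fill(255, 0, 0);
      noStroke();
      ellipse(kp.position.x, kp.position.y, 8, 8);
    }
  }
}
```

Extending this to multiple people would mean calling `poseNet.multiPose(img)` instead and iterating over every entry in `results`; video tracking would pass a video element to `ml5.poseNet` so detection runs per frame.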