[Example Project] Recognizing the Body to Trigger Lens Effect
Hey everyone! I want to share a project with you all that lets you do simple body pose recognition. It works by using skeletal tracking, similar to what you'll find in the Skeletal template.
This example project makes it easy to capture your own body pose, add it to a database, and recognize it to trigger effects in your Lens.
The Example Project
Let’s take a brief look at the project structure. In the Orthographic Camera hierarchy you can see 4 scene objects: Tap Print, Countdown Print, Current Pose and Jumping Jack. Each of these objects represents an example, and they work independently of each other. By activating one and deactivating the rest, you get different behaviors in the Lens.
Tap to Print
The first example is Tap Print, which helps you take a “snapshot” of a pose on the screen. When it is active and you tap the Lens in the Preview panel, the Lens prints a snapshot of the current pose to the Logger panel. You can use this object to add a pose to the database for recognition.
So load a video containing the poses you want to capture, activate the Tap Print object, and tap the Preview panel any time you want to take a snapshot of a pose. Then copy the printed line from the Logger panel and paste it into PoseDatabase.js. Make sure you remove the timestamp, add a comma at the end of the line, and give the entry a unique name.
Pose snapshot:
Then paste it into the array of poses in PoseDatabase.js (a rough sketch of an entry follows below):
PoseDatabase:
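To make that step concrete, here is a hypothetical sketch of what an entry in PoseDatabase.js could end up looking like after you paste a snapshot in. The field names and numbers below are placeholders for illustration only; the real line comes straight from the Logger output.

// Hypothetical sketch of PoseDatabase.js; the real values come from the printed snapshot.
var poseDatabase = [
    {
        name: "armsUp",                          // the unique name you give the entry
        angles: [1.57, 1.60, 0.32, 0.35, 2.10]   // placeholder joint angles from the snapshot
    },
    // Paste each new snapshot line here, remove its timestamp, and end it with a comma.
];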
If you’re curious about how a “snapshot” works, it’s actually pretty simple! In short, we calculate the angle between each marker (red) and a reference vector along the body (green). In this case, we generate this reference by looking at a line perpendicular to the shoulders (cyan). Storing these angles lets us tell distinct body poses apart: we compare the angles we’ve recorded against the angles of the pose currently being recognized.
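Here is a minimal sketch of that idea in plain JavaScript, assuming 2D joint positions. The function and variable names are illustrative, not the actual PoseDetector code.

// Illustrative sketch of the angle calculation, not the actual PoseDetector implementation.
// All points are 2D positions, e.g. { x: 0.4, y: 0.6 }.
function angleBetween(a, b) {
    // Angle between two vectors via the dot product, in the range [0, pi].
    var dot = a.x * b.x + a.y * b.y;
    var lenA = Math.sqrt(a.x * a.x + a.y * a.y);
    var lenB = Math.sqrt(b.x * b.x + b.y * b.y);
    return Math.acos(dot / (lenA * lenB));
}

function poseAngles(bones, leftShoulder, rightShoulder) {
    // Reference vector (green): perpendicular to the line between the shoulders (cyan).
    var shoulderDir = { x: rightShoulder.x - leftShoulder.x, y: rightShoulder.y - leftShoulder.y };
    var reference = { x: -shoulderDir.y, y: shoulderDir.x };

    // One angle per tracked marker (red), measured against the reference vector.
    return bones.map(function (bone) {
        var dir = { x: bone.end.x - bone.start.x, y: bone.end.y - bone.start.y };
        return angleBetween(dir, reference);
    });
}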
Countdown Print
The Countdown Print example is very similar to the Tap Print example. The difference is that it takes the pose snapshot after a short delay, which is handy when you want to capture a pose using your webcam.
To use it, make sure you have deactivated all the other examples, then activate Countdown Print and switch the Preview panel to webcam mode. Step back until your body is in the frame, strike the pose you want, and wait for the countdown to finish. You can then save the pose the same way as before.
Current Pose
The Current Pose example demonstrates how poses from the database can be recognized. It uses Behavior to show an image when the onPoseDetected api callback is invoked by the PoseDetector system.
As before, make sure all the other examples are deactivated, then activate this one. Step back until your body is detected and strike one of the three default poses in the database. You should see an icon for the most recently recognized pose on the screen!
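If you prefer wiring this up in script instead of through Behavior, the sketch below shows one way it could be done. The poseIcons input, and the assumption that each icon object is named after its database entry, are mine and not part of the example project.

// Sketch: show the icon that matches the latest recognized pose, directly from script.
// Assumes each icon Scene Object is named after the corresponding pose in the database.
//@input Component.ScriptComponent poseDetector
//@input SceneObject[] poseIcons

script.poseDetector.api.addCallback("onPoseDetected", function (poseName) {
    for (var i = 0; i < script.poseIcons.length; i++) {
        // Enable only the icon whose object name matches the recognized pose.
        script.poseIcons[i].enabled = (script.poseIcons[i].name === poseName);
    }
});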
Jumping Jack Example
Jumping Jack is the most practical example of how to use body pose recognition. It uses two poses, one with the arms at the sides of the body and one with the arms above the head, to count jumping jacks. When both poses have been recognized, a jumping jack is counted.
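As a rough sketch of that counting logic, it could look something like the following. The pose names "armsDown" and "armsUp" are assumptions; use the names from your own database.

// Hypothetical sketch of counting jumping jacks with PoseDetector callbacks.
//@input Component.ScriptComponent poseDetector

var jumpingJacks = 0;
var sawArmsUp = false;

script.poseDetector.api.addCallback("onPoseDetected", function (poseName) {
    if (poseName === "armsUp") {
        sawArmsUp = true;
    } else if (poseName === "armsDown" && sawArmsUp) {
        // Arms went up and came back down: that's one jumping jack.
        jumpingJacks++;
        sawArmsUp = false;
        print("Jumping jacks: " + jumpingJacks);
    }
});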
Building your own Lens!
The core of the pose recognition in this project is found in the PoseDetector Scene Object and the script of the same name. As mentioned above, it calculates the angle of each joint in the skeleton tracking to understand how the user’s body is posed.
It exposes several APIs for you to use in your project:
addCallback([key], [callback function])
For example: script.poseDetector.api.addCallback("onBodyFound", onBodyFound);
Available keys are: onBodyFound, onBodyLost, onPoseDetected, onPoseLost
For onPoseDetected, the poseName is passed in as the first argument
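Putting those together, registering the callbacks might look like this (the callback bodies are just illustrations):

//@input Component.ScriptComponent poseDetector

script.poseDetector.api.addCallback("onBodyFound", function () {
    print("Body found");
});

script.poseDetector.api.addCallback("onBodyLost", function () {
    print("Body lost");
});

// onPoseDetected receives the pose name from the database as its first argument.
script.poseDetector.api.addCallback("onPoseDetected", function (poseName) {
    print("Recognized pose: " + poseName);
});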
activate()
For example: script.poseDetector.api.activate();
The pose detector starts processing only after this is called. This is useful when there are times your Lens doesn’t need to run the calculation, since you can hold off activating it until you actually need it.
getCurrentPose()
For example: script.poseDetector.api.getCurrentPose();
Prints out the data of a pose to be put into the poseDetectorDatabase. Make sure the body is detected before calling it; a sketch of one way to do that follows below.
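One way to make sure a body is detected first is to wait for the onBodyFound callback before asking for the snapshot. This is just a sketch of that pattern:

// Sketch: request the pose data only once a body has been found.
//@input Component.ScriptComponent poseDetector

script.poseDetector.api.addCallback("onBodyFound", function () {
    // The printed line appears in the Logger panel, ready to paste into the database.
    script.poseDetector.api.getCurrentPose();
});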
To recap
- Skeleton Tracking lets you detect the joints of the body, and we use the angle of each joint to convert your pose into storable data.
- You can save the printed data (Tap to Print and Countdown Print examples) into a database and compare it with what a Snapchatter is doing to “recognize” the pose.
- The example project comes with an onPoseDetected callback that lets you create logic that responds when a Snapchatter strikes a pose (Current Pose and Jumping Jack examples).
I can’t wait to see what interesting things you come up with using this pose detection script! Try creating different exercises, or game Lenses that respond to the body! Next week we’ll share how we made our Jumping Jack Lens!
Cheers!
Artem
-
Saw this project on the discovery page, love it, it’s cool!
-
That's really cool! Thank you for sharing Artem! :D
-
Oh, that is very cool. Very good job and a nice idea.
We could create something like the game show “Hole in the Wall”.
-
I really like it, thank you for sharing!
-
Hi janraps and anyone else who's encountering similar issues,
Here's a slightly improved and more robust version of the pose validation project above. The angles are now calculated relative to the phone gyro rather than relative to the body, which allows detection of poses with the same relative body angles while leaning left and right, like you tried.
This version also calculates the full angle range [0, 2pi] instead of [0, pi], so that should address the flip you experienced as well.
* Note that the poses are a little different from the original poses and more similar to the poses in your pictures.
Hope that helps!
Best,
Matan
-
Is it a good idea to use the Lens Studio service for normal and small-scale projects? Which type of service is good for a small business?