Motion Capture Using Jetson

Hello NVIDIA Jetson Community! I am a high school math teacher with a bit of background in programming and Botball-style robotics. I am also a graphic designer and 3D animator, and I know a little about neural nets from back in 1995. My students are interested in building a homemade MOCAP system: we want to capture human motion into Maya, Blender, or 3ds Max.

Looking over the Jetson projects on this site, I saw several very cool examples of what look like motion capture systems, including the "Human Pose Estimation" project and a yoga pose feedback project. I would like to know whether anyone thinks the Jetson is a good match for a MOCAP project. It seems like the future of MOCAP would be to eliminate all the wearable nodes and reference points and just let the AI figure it out, which could be smoother than the clunky, often jerky motion that traditional lower-budget MOCAP produces.

I would love to know if this sounds reasonable, and if so, what tech we would need to proceed. We have a rendering machine with an MSI GTX 1050 Ti; would that work for this? Are there bits of public code we could use, and are there folks in this community who might be willing to connect with us?

Thank you,



Thanks for posting your question here.

We recommend starting with pose estimation from our DeepStream sample below:
It can give you basic human pose estimation from a camera feed.

Do expect some differences in accuracy and results compared to a traditional MOCAP system, since these estimations are based only on 2D images.
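One practical note on the "jerky motion" concern raised above: per-frame 2D keypoint estimates tend to jitter, so MOCAP pipelines usually apply temporal smoothing before retargeting the motion to a rig. Below is a minimal sketch of exponential-moving-average smoothing in plain Python. The keypoint format (a list of `(x, y)` tuples per frame) is an assumption for illustration; adapt it to whatever your pose-estimation sample actually outputs.

```python
# Sketch: exponential-moving-average (EMA) smoothing of per-frame
# 2D keypoints from a pose estimator, to reduce frame-to-frame jitter.
# Assumed input format: each frame is a list of (x, y) keypoints in a
# fixed joint order -- adjust to your estimator's real output.

def smooth_keypoints(frames, alpha=0.5):
    """Return an EMA-smoothed copy of a keypoint sequence.

    frames: list of frames; each frame is a list of (x, y) tuples.
    alpha:  weight of the current frame; lower alpha = heavier smoothing
            (and more lag), so tune it per capture.
    """
    if not frames:
        return []
    smoothed = [list(frames[0])]  # first frame passes through unchanged
    for frame in frames[1:]:
        prev = smoothed[-1]
        # Blend each joint's position with its smoothed previous position.
        smoothed.append([
            (alpha * x + (1 - alpha) * px,
             alpha * y + (1 - alpha) * py)
            for (x, y), (px, py) in zip(frame, prev)
        ])
    return smoothed
```

For example, a single joint jumping from (0, 0) to (10, 0) with `alpha=0.5` eases in over several frames instead of snapping. Fancier filters (e.g. the One Euro filter) handle the smoothing-vs-lag trade-off better, but an EMA is a reasonable first experiment.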

Sorry, we don't have much experience with MOCAP systems ourselves.
Perhaps other users can share their experiences here.

