Hand Tracking on Jetson Orin Nano

Hello!

Hopefully someone’s got a solution here. I’ve got a project that needs just simple hand tracking on a Jetson Orin Nano (no gesture recognition or anything like that). I’ve been banging my head against the wall trying to get MediaPipe to work, and it just won’t.

Has anyone managed to get functional hand tracking working on the Jetson Orin Nano (preferably in Python)?

Thanks y’all!

Hello Paul,

Your project sounds interesting.

Would it be possible for you to share a bit more information?

What approach are you currently trying to use?

What exact errors are you seeing with MediaPipe?

Regards,
Andrew.
support@proventusnova.com

Hello Andrew!

I am using the NVIDIA Jetson Orin Nano board as part of an interactive LED wall display project at a product innovation center at my university. The goal is to use hand position tracking to create responsive, immersive interactions on a full RGB LED wall for community engagement.

This seemed like a natural fit for the Jetson platform, given its ML/AI performance, Tensor Cores, peripheral support, and small form factor.

I have attempted several times to compile MediaPipe from source for the Jetson Orin Nano, including following the suggested build at this link: How to Download & Build MediaPipe on NVIDIA Jetson Xavier NX?

While this did compile and run, TFLite refused to use the GPU as the calculator delegate and fell back to the CPU, which severely capped performance: I could only get around 5 FPS, which was not sufficient for my implementation. That was on JetPack 5.1.3, so I tried again on JetPack 6.0.2, individually installing OpenCV 4.9.0 and NVIDIA’s TensorFlow build for JetPack, attempting to compile MediaPipe against those instead of its bundled TFLite packages. That did not work either, as the only resources available for doing so are based on CUDA 10.2 rather than the newer CUDA versions that ship with JetPack 5/6.
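For context, the FPS figure above came from timing a loop roughly like the sketch below. This is a minimal reconstruction, not my exact script; the camera index and confidence thresholds are placeholders:

```python
import time

import cv2
import mediapipe as mp

cap = cv2.VideoCapture(0)  # assumes a USB camera at index 0
with mp.solutions.hands.Hands(max_num_hands=2,
                              min_detection_confidence=0.5,
                              min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        start = time.time()
        # MediaPipe expects RGB input; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        fps = 1.0 / (time.time() - start)
        if results.multi_hand_landmarks:
            print(f"{len(results.multi_hand_landmarks)} hand(s) at {fps:.1f} FPS")
cap.release()
```

Timing each `process()` call like this isolates inference latency, which is how I could tell the CPU delegate, not the capture loop, was the bottleneck.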

I’ve been able to do object detection with DetectNet in a Docker container at >120 FPS by following the build in this link: Detect objects in video and images | Arm Learning Paths

While this, too, worked, its detection classes don’t cover hands, so it can’t stand in for hand tracking, and full object detection is beyond what I need for this project.
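For anyone retracing that path, the detection pipeline from that guide boils down to a loop roughly like this (the model name and stream URIs here are assumptions; substitute whatever matches your setup):

```python
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

net = detectNet("ssd-mobilenet-v2", threshold=0.5)  # stock COCO-trained detector
camera = videoSource("/dev/video0")                 # assumes a V4L2 USB camera
display = videoOutput("display://0")                # render to the local display

while display.IsStreaming():
    img = camera.Capture()
    if img is None:  # capture timeout, try again
        continue
    detections = net.Detect(img)  # TensorRT-accelerated inference
    display.Render(img)
    display.SetStatus(f"DetectNet | {net.GetNetworkFPS():.0f} FPS")
```

The speed comes from TensorRT, but the COCO-style label set has a “person” class and no “hand” class, which is why this couldn’t cover my use case.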

I have ordered a Jetson Nano so that I can use the open-source builds for JetPack 4.6.1, which is not supported on the Jetson Orin Nano. But before I return my Orin board, I was curious whether anyone has gotten a hand position tracking model (not even landmark tracking) working on the Jetson Orin Nano, most likely without MediaPipe, as it is poorly supported on the Jetsons.

If you have any more questions, feel free to ask! I appreciate your time and consideration.

Best,
Paul
sadofskypaul@gmail.com
