Jetson Nano - Running Pose Estimation

Good day everyone, I recently got a Jetson Nano and I’ve followed the basic guides for running the image classification (15 fps) and object detection (5 fps) camera feed examples.

I’m using a 5 V/2 A Samsung charger as my power supply, so I set the power mode to 5W with the command:

sudo nvpmodel -m 1

Now I want to run pose estimation on the Jetson Nano. I’ve done a quick search on Google and the NVIDIA Dev Forum, but there doesn’t seem to be a guide yet.

Did I miss anything? Any leads would be very much appreciated.

Hi angelo_v

Please refer to: https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/

Thanks

This only runs the benchmark, which the Jetson Nano passes with flying colors. Is there a way to actually run live-feed human pose estimation, similar to the imagenet and detectnet camera examples? Thank you.

Hi,

How about OpenPose?
https://github.com/CMU-Perceptual-Computing-Lab/openpose

It has an installation script for the TX2; the steps for the Nano should be similar.
https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation_jetson_tx2_jetpack3.3.md

Please note that pose estimation is more complicated, so expect some impact on performance.

Thanks.

Hello, I followed the instructions on the GitHub page. One note: there’s no need to look up how to install Caffe yourself; it’s listed in the dependencies, but the script installs it for you.

I just want to know how to tell the OpenPose script to use my Pi camera (NoIR Camera v2). The camera is detected by Cheese and the various camera diagnostic command-line tools. But when I run:

./build/examples/openpose/openpose.bin -camera_resolution 640x480 -net_resolution 128x96

It just tells me that it can’t find my camera. Any leads would be much appreciated.

Hi,

It looks like OpenPose uses OpenCV to open the camera:
https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/fcf9ab8e0c711664b7ffc25e9bd9aaa26d1d8a5a/include/openpose/flags.hpp#L31

Here are the steps to add the camera support into OpenPose:

1. Make sure your camera can be opened through GStreamer.

2. Build OpenCV from source.
The default OpenCV package is built without GStreamer support, so you will need to compile it yourself.
https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.0.0_Nano.sh

3. Rebuild OpenPose against the new OpenCV build.
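Once OpenCV is rebuilt with GStreamer, one way to sanity-check the CSI camera is to open it through a GStreamer pipeline string from Python. The sketch below assumes JetPack 4.x, where the CSI source element is nvarguscamerasrc (older JetPack releases used nvcamerasrc instead); the resolution and frame rate are just example values.

```python
# Sketch: build a GStreamer pipeline string for a CSI camera (e.g. the
# Raspberry Pi NoIR Camera v2) on the Jetson Nano. Assumes JetPack 4.x,
# where the CSI source element is nvarguscamerasrc.

def csi_pipeline(width=640, height=480, fps=30):
    """Return a GStreamer pipeline string suitable for OpenCV's
    GStreamer backend: cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# With a GStreamer-enabled OpenCV build, opening the camera would look like:
#   cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
```

If cv2.VideoCapture opens this pipeline but OpenPose still can’t see the camera, the problem is likely on the OpenPose side rather than the OpenCV/GStreamer side.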

Thanks.

I ran the .sh script to install OpenCV 4, but when I launch python3 from the console and try to import cv2, I get the “No module named ‘cv2’” error. It seems something went wrong with the install script.
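A common cause of this error is that a from-source build installs the cv2 bindings into a prefix that isn’t on the python3 module path. As a hedged sketch, the candidate directories below are typical install locations for a source build, not something the script guarantees — adjust them if the build used a different CMAKE_INSTALL_PREFIX:

```python
# Sketch: look for the compiled cv2 module in common install locations
# to see whether the build landed somewhere python3 doesn't search.
import glob
import sys


def find_cv2(patterns=None):
    """Return paths matching cv2* under likely install directories.

    The defaults are common locations for a from-source OpenCV install;
    they are assumptions, not output of the install script.
    """
    if patterns is None:
        patterns = [
            "/usr/local/lib/python3*/dist-packages",
            "/usr/local/lib/python3*/site-packages",
            "/usr/lib/python3/dist-packages",
        ]
    hits = []
    for pattern in patterns:
        hits.extend(glob.glob(pattern + "/cv2*"))
    return hits


if __name__ == "__main__":
    print("python:", sys.version.split()[0])
    print("cv2 candidates:", find_cv2() or "none found")
```

If the module turns up in a directory that isn’t listed in sys.path, pointing PYTHONPATH at that directory (or reinstalling with the prefix your python3 actually uses) should make the import work.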