Good day everyone, I recently got a Jetson Nano and have followed the basic guides for running the image classification (15 fps) and object detection (5 fps) camera feed examples.
I’m using a 5V 2A Samsung Charger as my power supply, so I set the power mode to 5W using the command:
sudo nvpmodel -m 1
Now I want to run pose estimation on the Jetson Nano. I've done a quick search through Google and the NVIDIA Developer Forums, but there doesn't seem to be a guide yet.
Did I miss anything? Any leads would be very much appreciated.
This only runs the benchmark, which the Jetson Nano passes with flying colors. Is there a way to actually run human pose estimation on a live camera feed, similar to the imagenet and detectnet camera examples? Thank you.
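To make it concrete, here is roughly the kind of loop I'm hoping to run. This is only a minimal sketch: it assumes OpenCV was built with GStreamer support so the CSI camera can be opened through an nvarguscamerasrc pipeline, and run_pose_estimation() is just a placeholder for whatever pose model ends up being used, not an existing API.

import cv2

# GStreamer pipeline for the CSI camera on the Nano (e.g. a Raspberry Pi camera module).
# Requires an OpenCV build with GStreamer support.
GST_PIPELINE = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

def run_pose_estimation(frame):
    # Placeholder: run the pose model of your choice here (OpenPose, trt_pose, ...)
    # and return the frame with keypoints/skeletons drawn on it.
    return frame

cap = cv2.VideoCapture(GST_PIPELINE, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Could not open the CSI camera via GStreamer")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = run_pose_estimation(frame)
    cv2.imshow("Pose Estimation", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()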
Hello, I followed the instructions on the GitHub page. One note: there's no need to look up how to install Caffe yourself. It may be listed in the dependencies, but the script installs it for you.
I just want to know how I can tell the OpenPose script to use the Pi camera I have (NoIR Camera V2). The camera is detected by Cheese and the various camera diagnostic command-line tools. But when I run:
I copied the .sh script to install OpenCV 4, but when I ran python3 from the console and tried to import cv2, I got a "No module named 'cv2'" error. It seems there's something wrong with the install script.
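As a quick sanity check of which interpreter is running and whether any cv2 is visible to it, something like this can help narrow it down (a minimal diagnostic sketch, not tied to that particular install script):

import sys

print("interpreter:", sys.executable)
print("version:", sys.version.split()[0])

try:
    import cv2
    print("cv2", cv2.__version__, "loaded from", cv2.__file__)
except ImportError as err:
    # If this prints, OpenCV was likely installed for a different interpreter
    # (e.g. python2) or the build never produced the python3 bindings.
    print("cv2 not importable:", err)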