Hello AI World - updates for pose estimation and mono depth

Hey everyone!

I’ve integrated some new DNN vision models into the jetson-inference library and Hello AI World tutorial:

Pose Estimation (Body)
Pose Estimation (Hand)
Mono Depth

These are supported from both Python and C++. Updates have also been made for JetPack 4.6 and TensorRT 8.0, and the containers have been rebuilt back to JetPack 4.4.1. There have also been updates to jetson-utils, including cudaMemcpy from Python and drawing shapes with CUDA.
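For anyone wanting to try these from Python, here is a minimal sketch of running the body pose model, modeled after the existing detectnet.py-style samples — the "resnet18-body" model name and overlay flags follow the tutorial, but treat the exact arguments as approximate and check posenet.py in the repo for the canonical version:

import jetson.inference
import jetson.utils

# load the body pose model (fetched by tools/download-models.sh)
net = jetson.inference.poseNet("resnet18-body", threshold=0.15)

# open the camera and display (V4L2, CSI, and RTP URIs also work)
camera = jetson.utils.videoSource("/dev/video0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()

    # run pose estimation and overlay the skeleton links/keypoints
    poses = net.Process(img, overlay="links,keypoints")
    print("detected {:d} person(s)".format(len(poses)))

    display.Render(img)
    display.SetStatus("poseNet {:.0f} FPS".format(net.GetNetworkFPS()))

Mono depth works along the same lines — a rough sketch, assuming the "fcn-mobilenet" model and that the raw depth field is exposed as a CUDA image (see depthnet.py in the repo for the exact API):

import jetson.inference
import jetson.utils

net = jetson.inference.depthNet("fcn-mobilenet")

img = jetson.utils.loadImage("my_image.jpg")   # any test image
net.Process(img)

# map the raw single-channel depth field into numpy for post-processing
jetson.utils.cudaDeviceSynchronize()
depth = jetson.utils.cudaToNumpy(net.GetDepthField())
print("depth field shape:", depth.shape)

And a quick sketch of the jetson-utils additions mentioned above (function names as I recall them from the repo — verify the signatures against cuda-examples.py):

import jetson.utils

img = jetson.utils.loadImage("my_image.jpg")   # any test image

# draw shapes directly into CUDA memory
jetson.utils.cudaDrawCircle(img, (50, 50), 25, (0, 255, 127, 200))
jetson.utils.cudaDrawRect(img, (200, 25, 350, 250), (255, 127, 0, 200))
jetson.utils.cudaDrawLine(img, (25, 150), (325, 15), (255, 0, 200, 200), 10)

# GPU-side image copy from Python
img_copy = jetson.utils.cudaAllocMapped(width=img.width, height=img.height, format=img.format)
jetson.utils.cudaMemcpy(img_copy, img)

jetson.utils.saveImage("output.jpg", img_copy)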


I don’t mean to sound ungrateful, because I’m not. The Hello AI World set of tutorials is really great. I tried following some community projects and just got lost because they are so complicated to set up.

I don’t seem to have posenet in the container as of this moment. I’m on a 2GB Nano. The imagenet, detectnet, segnet demos all work OK.

imagenet, detectnet, and segnet are all there, but the new posenet and depthnet binaries are missing as far as I can see:

gary@gary-desktop:~/jetson-inference$ docker/run.sh 
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R32.5.1
size of data/networks:  621894616 bytes
CONTAINER:     dustynv/jetson-inference:r32.5.0
DATA_VOLUME:   --volume /home/gary/jetson-inference/data:/jetson-inference/data --volume /home/gary/jetson-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume /home/gary/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/gary/jetson-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume /home/gary/jetson-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models
USER_VOLUME:   
USER_COMMAND:  
V4L2_DEVICES:    --device /dev/video0  --device /dev/video1 
localuser:root being added to access control list
root@gary-desktop:/jetson-inference# cd build/
aarch64/            download-models.rc  download-models.sh
root@gary-desktop:/jetson-inference# cd build/
aarch64/            download-models.rc  download-models.sh
root@gary-desktop:/jetson-inference# cd build/aarch64/bin/
root@gary-desktop:/jetson-inference/build/aarch64/bin# ls -l
total 6584
-rwxr-xr-x 1 root root 738768 Jan 21  2021 camera-capture
-rwxr-xr-x 1 root root  14408 Jan 21  2021 camera-viewer
-rwxrwxr-x 1 root root   2245 Jan 21  2021 camera-viewer.py
-rwxrwxr-x 1 root root   3312 Jan 21  2021 cuda-examples.py
-rwxrwxr-x 1 root root   2368 Jan 21  2021 cuda-from-cv.py
-rwxrwxr-x 1 root root   2720 Jan 21  2021 cuda-from-numpy.py
-rwxrwxr-x 1 root root   2438 Jan 21  2021 cuda-to-cv.py
-rwxrwxr-x 1 root root   2121 Jan 21  2021 cuda-to-numpy.py
-rwxr-xr-x 1 root root 575240 Jan 21  2021 detectnet
-rwxr-xr-x 1 root root 575240 Jan 21  2021 detectnet-camera
-rwxrwxr-x 1 root root   3341 Jan 21  2021 detectnet-camera.py
-rwxr-xr-x 1 root root 575240 Jan 21  2021 detectnet-console
-rwxrwxr-x 1 root root   3341 Jan 21  2021 detectnet-console.py
-rwxrwxr-x 1 root root   3341 Jan 21  2021 detectnet.py
-rwxr-xr-x 1 root root 655000 Jan 21  2021 gl-display-test
-rwxrwxr-x 1 root root   2113 Jan 21  2021 gl-display-test.py
-rwxr-xr-x 1 root root 575216 Jan 21  2021 imagenet
-rwxr-xr-x 1 root root 575216 Jan 21  2021 imagenet-camera
-rwxrwxr-x 1 root root   3671 Jan 21  2021 imagenet-camera.py
-rwxr-xr-x 1 root root 575216 Jan 21  2021 imagenet-console
-rwxrwxr-x 1 root root   3671 Jan 21  2021 imagenet-console.py
-rwxrwxr-x 1 root root   3671 Jan 21  2021 imagenet.py
lrwxrwxrwx 1 root root     29 Jan 21  2021 images -> /jetson-inference/data/images
-rwxrwxr-x 1 root root   1602 Jan 21  2021 my-detection.py
-rwxrwxr-x 1 root root   1949 Jan 21  2021 my-recognition.py
lrwxrwxrwx 1 root root     31 Jan 21  2021 networks -> /jetson-inference/data/networks
-rwxr-xr-x 1 root root 580208 Jan 21  2021 segnet
-rwxrwxr-x 1 root root    124 Jan 21  2021 segnet-batch.sh
-rwxr-xr-x 1 root root 580208 Jan 21  2021 segnet-camera
-rwxrwxr-x 1 root root   4413 Jan 21  2021 segnet-camera.py
-rwxr-xr-x 1 root root 580208 Jan 21  2021 segnet-console
-rwxrwxr-x 1 root root   4413 Jan 21  2021 segnet-console.py
-rwxrwxr-x 1 root root   4413 Jan 21  2021 segnet.py
-rwxrwxr-x 1 root root   4051 Jan 21  2021 segnet_utils.py
-rwxr-xr-x 1 root root  19208 Jan 21  2021 video-viewer
-rwxrwxr-x 1 root root   2195 Jan 21  2021 video-viewer.py
root@gary-desktop:/jetson-inference/build/aarch64/bin# 

Which Jetson devices will this work on? There is often confusion on this point, and I at least find it difficult to tell from the web resources. Truth be told, I bought a Xavier NX thinking it would do things that I've since found need at least Xavier AGX power to accomplish. I would trade up if it were an option, but the AGX isn't even available now. Cheers, mates.

Hi @gary17, no worries - I think you may just need to pull the latest version of the container and the GitHub repo. Try running this:

$ cd /path/to/your/jetson-inference
$ git pull origin master
$ sudo docker pull dustynv/jetson-inference:r32.5.0

# you also probably want to re-run the model downloader tool to download the poseNet models
$ cd tools
$ ./download-models.sh

If the docker pull downloads new layers, that means you're getting the updated container.
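Once the updated container is running, a quick sanity check from Python (assuming the body pose model was fetched by download-models.sh — it will build the TensorRT engine on first load):

import jetson.inference

# if this loads without error, the posenet models made it into the container's data volume
net = jetson.inference.poseNet("resnet18-body")
print("poseNet loaded OK")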

Hi @Nick_H, as indicated at the top of the readme, it’s supported on Jetson Nano, TX1/TX2, Xavier NX, and AGX Xavier.

Is this supported on the Jetson Orin Nano?
Would I have to build a docker container with an earlier version of JetPack?

@jcass358 Yep, jetson-inference is supported on Orin, and a number of new features/updates have been made:

And there are pre-built container images available for JetPack 5: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md
