Cannot use TAO Deploy on Jetson AGX Orin

I followed the TAO Deploy tutorial and started a container with this command:

sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel

Then I tried:

apt install libopenmpi-dev
pip install nvidia_tao_deploy==5.0.0.423.dev0
pip install https://files.pythonhosted.org/packages/f7/7a/ac2e37588fe552b49d8807215b7de224eef60a495391fdacc5fa13732d11/nvidia_eff_tao_encryption-0.1.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
pip install https://files.pythonhosted.org/packages/0d/05/6caf40aefc7ac44708b2dcd5403870181acc1ecdd93fa822370d10cc49f3/nvidia_eff-0.6.2-py38-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl

Then I get:

root@ubuntu:~/cqy# pip install ./nvidia_tao_deploy-5.0.0.423.dev0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl 
Processing ./nvidia_tao_deploy-5.0.0.423.dev0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
Collecting opencv-python
  Using cached opencv_python-4.10.0.84-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (41.7 MB)
Collecting h5py==3.7.0
  Using cached h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (8.3 MB)
Collecting onnx
  Using cached onnx-1.17.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (15.9 MB)
Requirement already satisfied: six>=1.12.0 in /usr/lib/python3/dist-packages (from nvidia-tao-deploy==5.0.0.423.dev0) (1.16.0)
Collecting natsort
  Using cached natsort-8.4.0-py3-none-any.whl (38 kB)
Collecting requests>=2.31.0
  Downloading requests-2.32.3-py3-none-any.whl (64 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.9/64.9 KB 57.2 kB/s eta 0:00:00
Collecting seaborn==0.7.1
  Using cached seaborn-0.7.1.tar.gz (158 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [42 lines of output]
      running egg_info
      creating /tmp/pip-pip-egg-info-v051cj7r/seaborn.egg-info
      writing /tmp/pip-pip-egg-info-v051cj7r/seaborn.egg-info/PKG-INFO
      writing dependency_links to /tmp/pip-pip-egg-info-v051cj7r/seaborn.egg-info/dependency_links.txt
      writing requirements to /tmp/pip-pip-egg-info-v051cj7r/seaborn.egg-info/requires.txt
      writing top-level names to /tmp/pip-pip-egg-info-v051cj7r/seaborn.egg-info/top_level.txt
      writing manifest file '/tmp/pip-pip-egg-info-v051cj7r/seaborn.egg-info/SOURCES.txt'
      reading manifest file '/tmp/pip-pip-egg-info-v051cj7r/seaborn.egg-info/SOURCES.txt'
      reading manifest template 'MANIFEST.in'
      adding license file 'LICENSE'
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-ptgqp4hd/seaborn_bbd91fd0a2ac428fa055ae69bfe32017/setup.py", line 68, in <module>
          setup(name=DISTNAME,
        File "/usr/local/lib/python3.10/dist-packages/setuptools/__init__.py", line 117, in setup
          return distutils.core.setup(**attrs)
        File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/core.py", line 183, in setup
          return run_commands(dist)
        File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/core.py", line 199, in run_commands
          dist.run_commands()
        File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 954, in run_commands
          self.run_command(cmd)
        File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 995, in run_command
          super().run_command(command)
        File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 973, in run_command
          cmd_obj.run()
        File "/usr/local/lib/python3.10/dist-packages/setuptools/command/egg_info.py", line 313, in run
          self.find_sources()
        File "/usr/local/lib/python3.10/dist-packages/setuptools/command/egg_info.py", line 321, in find_sources
          mm.run()
        File "/usr/local/lib/python3.10/dist-packages/setuptools/command/egg_info.py", line 549, in run
          self.prune_file_list()
        File "/usr/local/lib/python3.10/dist-packages/setuptools/command/sdist.py", line 162, in prune_file_list
          super().prune_file_list()
        File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/command/sdist.py", line 380, in prune_file_list
          base_dir = self.distribution.get_fullname()
        File "/usr/local/lib/python3.10/dist-packages/setuptools/_core_metadata.py", line 267, in get_fullname
          return _distribution_fullname(self.get_name(), self.get_version())
        File "/usr/local/lib/python3.10/dist-packages/setuptools/_core_metadata.py", line 285, in _distribution_fullname
          canonicalize_version(version, strip_trailing_zero=False),
      TypeError: canonicalize_version() got an unexpected keyword argument 'strip_trailing_zero'
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
root@ubuntu:~/cqy# pip3 install cython
Requirement already satisfied: cython in /usr/local/lib/python3.10/dist-packages (3.0.11)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@ubuntu:~/cqy# pip install nvidia_tao_deploy==5.0.0.423.dev0

ERROR: Could not find a version that satisfies the requirement nvidia_tao_deploy==5.0.0.423.dev0 (from versions: 4.0.0.1)
ERROR: No matching distribution found for nvidia_tao_deploy==5.0.0.423.dev0

Could you help me take a look at this?

JetPack 5.0 + nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel should work.
Please follow tao_deploy/README.md at main · NVIDIA/tao_deploy · GitHub to install tao-deploy on Jetson; a rough sketch follows below.
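For reference, a minimal from-source sketch of that README install (this is my assumption of the flow, using the setup_l4t.py script that appears later in this thread; the clone path is arbitrary):

$ git clone https://github.com/NVIDIA/tao_deploy.git /app/tao/tao_deploy
$ cd /app/tao/tao_deploy
# setup_l4t.py is the Jetson/L4T setup script referenced later in this thread
$ python setup_l4t.py install
# sanity check that the package is visible
$ pip list | grep nvidia-tao-deploy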

But please do not flash JetPack 6.0 onto the Jetson, per this thread.

Thanks for the reply, but I am using JetPack 6.0 for another function. Can I use TAO Deploy on JetPack 6.0? I want to use the TensorRT versions of Mask GroundingDINO, GroundingDINO, and FoundationPose.

Yes, it is possible now. In JetPack 6.0 + nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel:

$ apt-get install vim
$ vim /etc/apt/sources.list   
and add the lines below:
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ jammy main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ jammy-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ jammy-security main restricted universe multiverse

$ apt update
$ apt install libc6
$ ldd --version
$ apt install libopenmpi-dev
$ pip install nvidia_tao_deploy==5.0.0.423.dev0
$ pip install https://files.pythonhosted.org/packages/f7/7a/ac2e37588fe552b49d8807215b7de224eef60a495391fdacc5fa13732d11/nvidia_eff_tao_encryption-0.1.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
$ pip install https://files.pythonhosted.org/packages/0d/05/6caf40aefc7ac44708b2dcd5403870181acc1ecdd93fa822370d10cc49f3/nvidia_eff-0.6.2-py38-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
$ detectnet_v2 --help

I have two Jetson devices, one with JetPack 5 and one with JetPack 6. I followed those commands, and JetPack 5 succeeds with the correct output:

root@ubuntu:~# detectnet_v2 --help
2024-12-07 06:10:04,524 [INFO] matplotlib.font_manager: generated new fontManager
Loading uff directly from the package source code
usage: detectnet_v2 [-h] [--gpu_index GPU_INDEX] [--log_file LOG_FILE]
                    {evaluate,gen_trt_engine,inference} ...

Transfer Learning Toolkit

optional arguments:
  -h, --help            show this help message and exit
  --gpu_index GPU_INDEX
                        The index of the GPU to be used.
  --log_file LOG_FILE   Path to the output log file.

tasks:
  {evaluate,gen_trt_engine,inference}

but JetPack 6 fails with this output:

root@ubuntu:~# detectnet_v2 --help
Traceback (most recent call last):
  File "/usr/local/bin/detectnet_v2", line 8, in <module>
    sys.exit(main())
  File "<frozen cv.detectnet_v2.entrypoint.detectnet_v2>", line 24, in main
  File "<frozen cv.common.entrypoint.entrypoint_proto>", line 212, in launch_job
  File "<frozen cv.common.entrypoint.entrypoint_proto>", line 64, in get_modules
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 848, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "</usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/scripts/evaluate.py>", line 3, in <module>
  File "<frozen cv.detectnet_v2.scripts.evaluate>", line 40, in <module>
  File "</usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/inferencer.py>", line 1, in <module>
  File "<frozen cv.detectnet_v2.inferencer>", line 25, in <module>
  File "/usr/local/lib/python3.8/dist-packages/tensorrt/__init__.py", line 68, in <module>
    from .tensorrt import *
ImportError: /lib/aarch64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/lib/aarch64-linux-gnu/nvidia/libnvdla_compiler.so)

To use TAO Deploy on JetPack 5, I ran the following commands:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

source $HOME/.cargo/env

pip install tokenizers==0.21.0


python setup_l4t.py install

pip list | grep nvidia-tao-deploy

Then nvidia-tao-deploy is installed successfully:

root@ubuntu:/app/tao/tao_deploy# pip list | grep nvidia-tao-deploy
nvidia-tao-deploy         5.0.0.423.dev0

Then I try to generate a TensorRT engine for Mask GroundingDINO inside the TAO Deploy repo:

python scripts/gen_trt_engine.py --config-path hydra_config/default_config.py

and I get this error:

root@ubuntu:/app/tao/tao_deploy/nvidia_tao_deploy/cv/mask_grounding_dino# python scripts/gen_trt_engine.py --config-path hydra_config/default_config.py
Traceback (most recent call last):
  File "scripts/gen_trt_engine.py", line 24, in <module>
    from nvidia_tao_deploy.cv.grounding_dino.engine_builder import GDINODetEngineBuilder
ModuleNotFoundError: No module named 'nvidia_tao_deploy.cv.grounding_dino'
root@ubuntu:/app/tao/tao_deploy/nvidia_tao_deploy/cv/mask_grounding_dino# 

For JetPack 6.0, please run inside the docker container:
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel /bin/bash
Then run the above steps again to double check.
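If the GLIBCXX_3.4.29 error from libnvdla_compiler.so appears again, one quick check (my addition, not part of the original steps) is to list the GLIBCXX versions the container's libstdc++ actually exports; strings ships with binutils, which may need to be installed first:

$ apt-get install -y binutils
$ strings /lib/aarch64-linux-gnu/libstdc++.so.6 | grep GLIBCXX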

These are not the steps mentioned in the GitHub repo (GitHub - NVIDIA/tao_deploy: Package for deploying deep learning models from TAO Toolkit). You can follow that instead.

Grounding DINO is a new feature in TAO 5.5, so the tao-deploy 5.0 version does not cover it. You can git clone GitHub - NVIDIA/tao_deploy at tao_5.5_release into the docker container and try to run it.
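For example, a minimal sketch of switching an existing clone to that branch and reinstalling (assuming the repo lives at /app/tao/tao_deploy, as in the prompts earlier in this thread):

$ cd /app/tao/tao_deploy
$ git fetch origin
$ git checkout tao_5.5_release
$ python setup_l4t.py install
$ pip list | grep nvidia-tao-deploy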

This issue occurred while using TAO Deploy with JetPack 5.0, and I was indeed working with TAO 5.5. Below is my log after reinstalling using the setup_l4t.py script under the TAO 5.5 release.

root@ubuntu:/app/tao/tao_deploy/nvidia_tao_deploy/cv/mask_grounding_dino# export PYTHONPATH=$PYTHONPATH:/app/tao/tao_deploy/nvidia_tao_deploy
root@ubuntu:/app/tao/tao_deploy/nvidia_tao_deploy/cv/mask_grounding_dino# python scripts/gen_trt_engine.py --config-path hydra_config/default_config.py
Traceback (most recent call last):
  File "scripts/gen_trt_engine.py", line 24, in <module>
    from nvidia_tao_deploy.cv.grounding_dino.engine_builder import GDINODetEngineBuilder
ModuleNotFoundError: No module named 'nvidia_tao_deploy.cv.grounding_dino'
root@ubuntu:/app/tao/tao_deploy/nvidia_tao_deploy/cv/mask_grounding_dino# git branch -a 
  main
* tao_5.5_release
  remotes/origin/HEAD -> origin/main
  remotes/origin/dependabot/pip/docker/cryptography-42.0.4
  remotes/origin/dependabot/pip/docker/scikit-learn-1.0.1
  remotes/origin/dependabot/pip/docker/scipy-1.11.1
  remotes/origin/main
  remotes/origin/tao_5.5_release

After that it still cannot be used. Can I have a JetPack container in which I can use TAO Deploy?

Can I know if there is another way to use the TensorRT version of Mask Grounding DINO from the TAO Toolkit?

That is because it is not found in your working path or in the PYTHONPATH you set with export PYTHONPATH=$PYTHONPATH:/app/tao/tao_deploy/nvidia_tao_deploy.
You can check nvidia_tao_deploy.cv.grounding_dino.engine_builder and confirm it is available.
Also, if needed, please note that you can modify the existing 5.5 code to make it work in the 5.0 deploy docker.
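For example, a minimal check (a sketch based on the paths in your prompts: for nvidia_tao_deploy.cv.grounding_dino to be importable, PYTHONPATH has to include the directory that contains the nvidia_tao_deploy package, i.e. the repo root, rather than the package directory itself):

$ export PYTHONPATH=$PYTHONPATH:/app/tao/tao_deploy
$ ls /app/tao/tao_deploy/nvidia_tao_deploy/cv/grounding_dino/engine_builder.py
$ python -c "from nvidia_tao_deploy.cv.grounding_dino.engine_builder import GDINODetEngineBuilder; print('import OK')"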

BTW, for DINO inference on a Jetson device, one existing way is to use the deepstream_tao_apps GitHub repo. The PeopleNet Transformer model is actually trained with DINO. The config file can be found at
deepstream_tao_apps/configs/nvinfer/peoplenet_transformer_tao/pgie_peoplenet_transformer_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.
The doc is at Deploying to DeepStream for DINO - NVIDIA Docs.
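A minimal sketch of getting that config locally (the clone URL and config path come from the links above; build and run steps are described in that repo's README):

$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
$ cd deepstream_tao_apps
$ cat configs/nvinfer/peoplenet_transformer_tao/pgie_peoplenet_transformer_tao_config.txt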

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
