NVIDIA TAO Deploy install fails in the nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel container on a Jetson AGX Orin

Hi, I’m trying to install nvidia-tao-deploy on my device, and following the provided instructions results in failure.
Here is the link to the instructions I followed:

To reproduce the issue:

  1. Run this command:
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel
  2. Inside the container, run pip install nvidia-tao-deploy. It fails with:
    Collecting nvidia-tao-deploy
    Downloading nvidia_tao_deploy-4.0.0.1-py3-none-any.whl (2.5 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 13.2 MB/s eta 0:00:00
    Collecting onnx
    Downloading onnx-1.16.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (15.8 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.8/15.8 MB 67.4 MB/s eta 0:00:00
    Collecting opencv-python
    Downloading opencv_python-4.10.0.84-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (41.7 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.7/41.7 MB 44.9 MB/s eta 0:00:00
    Collecting matplotlib>=3.0.3
    Downloading matplotlib-3.9.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (8.2 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.2/8.2 MB 57.9 MB/s eta 0:00:00
    Collecting protobuf==3.20.1
    Downloading protobuf-3.20.1-cp310-cp310-manylinux2014_aarch64.whl (917 kB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 917.9/917.9 KB 38.9 MB/s eta 0:00:00
    Collecting hydra-core==1.2.0
    Downloading hydra_core-1.2.0-py3-none-any.whl (151 kB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 151.1/151.1 KB 23.2 MB/s eta 0:00:00
    Collecting scikit-learn==0.24.2
    Downloading scikit-learn-0.24.2.tar.gz (7.5 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.5/7.5 MB 56.9 MB/s eta 0:00:00
    Installing build dependencies: started
    Installing build dependencies: finished with status 'done'
    Getting requirements to build wheel: started
    Getting requirements to build wheel: finished with status 'done'
    Preparing metadata (pyproject.toml): started
    Preparing metadata (pyproject.toml): still running...
    Preparing metadata (pyproject.toml): finished with status 'error'
    error: subprocess-exited-with-error

× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [1596 lines of output]
Partial import of sklearn during the build process.
/usr/lib/python3.10/importlib/__init__.py:126: UserWarning: A NumPy version >=1.23.5 and <2.3.0 is required

  The above exception was the direct cause of the following exception:
  
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
  [ 1/53] Cythonizing sklearn/__check_build/_check_build.pyx
  [ 2/53] Cythonizing sklearn/_isotonic.pyx
  [ 3/53] Cythonizing sklearn/cluster/_dbscan_inner.pyx
  [ 4/53] Cythonizing sklearn/cluster/_hierarchical_fast.pyx
  [ 5/53] Cythonizing sklearn/cluster/_k_means_elkan.pyx
  [ 6/53] Cythonizing sklearn/cluster/_k_means_fast.pyx
  [ 7/53] Cythonizing sklearn/cluster/_k_means_lloyd.pyx
  [ 8/53] Cythonizing sklearn/datasets/_svmlight_format_fast.pyx
  [ 9/53] Cythonizing sklearn/decomposition/_cdnmf_fast.pyx
  [10/53] Cythonizing sklearn/decomposition/_online_lda_fast.pyx
  [11/53] Cythonizing sklearn/ensemble/_gradient_boosting.pyx
  [12/53] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_binning.pyx
  [13/53] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_bitset.pyx
  [14/53] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_gradient_boosting.pyx
  [15/53] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_loss.pyx
  [16/53] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_predictor.pyx
  [17/53] Cythonizing sklearn/ensemble/_hist_gradient_boosting/common.pyx
  [18/53] Cythonizing sklearn/ensemble/_hist_gradient_boosting/histogram.pyx
  [19/53] Cythonizing sklearn/ensemble/_hist_gradient_boosting/splitting.pyx
  [20/53] Cythonizing sklearn/ensemble/_hist_gradient_boosting/utils.pyx
  [21/53] Cythonizing sklearn/feature_extraction/_hashing_fast.pyx
  [22/53] Cythonizing sklearn/linear_model/_cd_fast.pyx
  [23/53] Cythonizing sklearn/linear_model/_sag_fast.pyx
  [24/53] Cythonizing sklearn/linear_model/_sgd_fast.pyx
  [25/53] Cythonizing sklearn/manifold/_barnes_hut_tsne.pyx
  [26/53] Cythonizing sklearn/manifold/_utils.pyx
  [27/53] Cythonizing sklearn/metrics/_pairwise_fast.pyx
  [28/53] Cythonizing sklearn/metrics/cluster/_expected_mutual_info_fast.pyx
  [29/53] Cythonizing sklearn/neighbors/_ball_tree.pyx
  [30/53] Cythonizing sklearn/neighbors/_dist_metrics.pyx
  [31/53] Cythonizing sklearn/neighbors/_kd_tree.pyx
  [32/53] Cythonizing sklearn/neighbors/_quad_tree.pyx
  [33/53] Cythonizing sklearn/neighbors/_typedefs.pyx
  [34/53] Cythonizing sklearn/preprocessing/_csr_polynomial_expansion.pyx
  [35/53] Cythonizing sklearn/svm/_liblinear.pyx
  [36/53] Cythonizing sklearn/svm/_libsvm.pyx
  [37/53] Cythonizing sklearn/svm/_libsvm_sparse.pyx
  [38/53] Cythonizing sklearn/svm/_newrand.pyx
  [39/53] Cythonizing sklearn/tree/_criterion.pyx
  [40/53] Cythonizing sklearn/tree/_splitter.pyx
  [41/53] Cythonizing sklearn/tree/_tree.pyx
  [42/53] Cythonizing sklearn/tree/_utils.pyx
  [43/53] Cythonizing sklearn/utils/_cython_blas.pyx
  [44/53] Cythonizing sklearn/utils/_fast_dict.pyx
  [45/53] Cythonizing sklearn/utils/_logistic_sigmoid.pyx
  [46/53] Cythonizing sklearn/utils/_openmp_helpers.pyx
  [47/53] Cythonizing sklearn/utils/_random.pyx
  [48/53] Cythonizing sklearn/utils/_seq_dataset.pyx
  [49/53] Cythonizing sklearn/utils/_weight_vector.pyx
  [50/53] Cythonizing sklearn/utils/arrayfuncs.pyx
  [51/53] Cythonizing sklearn/utils/graph_shortest_path.pyx
  [52/53] Cythonizing sklearn/utils/murmurhash.pyx
  [53/53] Cythonizing sklearn/utils/sparsefuncs_fast.pyx
      main()
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 164, in prepare_metadata_for_build_wheel
      return hook(metadata_directory, config_settings)
    File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 174, in prepare_metadata_for_build_wheel
      self.run_setup()
    File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 267, in run_setup
      super(_BuildMetaLegacyBackend,
    File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 158, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 301, in <module>
      setup_package()
    File "setup.py", line 297, in setup_package
      setup(**metadata)
    File "/usr/lib/python3/dist-packages/numpy/distutils/core.py", line 135, in setup
      config = configuration()
    File "setup.py", line 188, in configuration
      config.add_subpackage('sklearn')
    File "/usr/lib/python3/dist-packages/numpy/distutils/misc_util.py", line 1014, in add_subpackage
      config_list = self.get_subpackage(subpackage_name, subpackage_path,
    File "/usr/lib/python3/dist-packages/numpy/distutils/misc_util.py", line 980, in get_subpackage
      config = self._get_configuration_from_setup_py(
    File "/usr/lib/python3/dist-packages/numpy/distutils/misc_util.py", line 922, in _get_configuration_from_setup_py
      config = setup_module.configuration(*args)
    File "/tmp/pip-install-q_62ve15/scikit-learn_43d84ea2c64e496c99f2389b041220e6/sklearn/setup.py", line 83, in configuration
      cythonize_extensions(top_path, config)
    File "/tmp/pip-install-q_62ve15/scikit-learn_43d84ea2c64e496c99f2389b041220e6/sklearn/_build_utils/__init__.py", line 70, in cythonize_extensions
      config.ext_modules = cythonize(
    File "/tmp/pip-build-env-sk18gr2r/overlay/local/lib/python3.10/dist-packages/Cython/Build/Dependencies.py", line 1145, in cythonize
      result.get(99999)  # seconds
    File "/usr/lib/python3.10/multiprocessing/pool.py", line 774, in get
      raise self._value
  Cython.Compiler.Errors.CompileError: sklearn/ensemble/_hist_gradient_boosting/splitting.pyx
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I’ve attached the full log for this issue: log.txt (193.3 KB)
What are the next steps to make this work? I need tao-deploy to be able to quantize, calibrate, and build an engine file for an existing model.
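
For reference, the failure above is pip building scikit-learn 0.24.2 from source on Python 3.10, with Cython failing on splitting.pyx. A workaround sometimes suggested for this class of failure is to pre-install pinned build tools and retry without build isolation. This is only a sketch and untested here; the version pins are guesses, and scikit-learn 0.24.2 predates Python 3.10, so it may still fail to compile:

# Pin older build tools first (pins are assumptions, not confirmed fixes)
pip install "cython<3" "numpy<1.24" "scipy<1.8" wheel
# Build scikit-learn against the pre-installed tools instead of an isolated env
pip install --no-build-isolation scikit-learn==0.24.2
# Then retry the TAO Deploy install
pip install nvidia-tao-deploy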

• Hardware (Model: NVIDIA Jetson AGX Orin Developer Kit - Jetpack 6.0 [L4T 36.3.0])
• Network Type (Classification)
• TRT Version (nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel)

Please refer to GitHub - NVIDIA/tao_deploy (Package for deploying deep learning models from TAO Toolkit), or the forum topic "Tao-deploy on Orin AGX CLI Error" - #15 by Morganh.

Thanks for your response. I will try this out and let you know how it goes.

OK, so I tried the steps there, and they don’t seem to work with the TensorRT version we need to run:

docker run --runtime=nvidia -it --rm -v /home/teknoir/taddeus:/workspace nvcr.io/nvidia/l4t-tensorrt:r8.6.2-devel /bin/bash

Installing nvidia_tao_deploy==5.0.0.423.dev0 seems to require Python 3.8.x. I’ve installed Python 3.8 in this container and proceeded with the steps, but it seems that now I’m missing TensorRT.
logs.txt (34.1 KB)
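
For anyone reproducing this: one way to get Python 3.8 onto the Ubuntu 22.04 base image is the deadsnakes PPA (the steps below ended up using a Miniconda-based Python 3.8 instead, as the pyconfig.h path shows). A minimal sketch; the venv path /opt/py38 is an arbitrary choice:

apt-get update && apt-get install -y software-properties-common
add-apt-repository -y ppa:deadsnakes/ppa
apt-get update && apt-get install -y python3.8 python3.8-venv python3.8-dev
# Create and activate an isolated Python 3.8 environment (path is arbitrary)
python3.8 -m venv /opt/py38
. /opt/py38/bin/activate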
It looks like I need to build TensorRT from source, since it won’t install with pip install tensorrt…
The steps I took for that are below, but they didn’t produce the wheels. Could you look at the steps I took?

Building TensorRT from Source

  1. Download TensorRT OSS: TensorRT’s open-source components are available on GitHub. Clone the repository:
git clone https://github.com/NVIDIA/TensorRT.git
cd TensorRT
  2. Initialize and Update Submodules: TensorRT uses submodules for certain components, including the ONNX parser. Make sure all submodules are initialized and updated by running the following command in the root directory of the TensorRT source:
git submodule update --init --recursive
  3. Prepare the Environment: Ensure that all necessary dependencies are installed, including CUDA, cuDNN, and other libraries. Set up the environment variables:
export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
  4. Run CMake and Make: Once the submodules are updated and verified, run the CMake and make commands:
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCUDA_VERSION=$(nvcc --version | grep release | awk '{print $6}' | cut -c2-)
make -j$(nproc)
  5. Set the pyconfig Headers: Point CMake at the Python include directory containing pyconfig.h, then re-run it:
export PY_CONFIG_INCLUDE=/root/miniconda3/pkgs/python-3.8.19-h4bb2201_0/include/python3.8
cmake .. -DPY_CONFIG_INCLUDE=$PY_CONFIG_INCLUDE
  6. Install Python Bindings: After building, you can install the Python bindings. Navigate to the Python directory and build the wheel:
cd TensorRT/python
TENSORRT_MODULE=tensorrt PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=8 TARGET_ARCHITECTURE=aarch64 ./build.sh
python -m pip install ./build/bindings_wheel/dist/tensorrt-*.whl
  7. Verify the Python Installation: Test the installation by importing TensorRT in Python:
import tensorrt as trt
print(trt.__version__)
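
Equivalently, the same check can be run as a shell one-liner:

python -c "import tensorrt as trt; print(trt.__version__)"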

These steps didn’t work either.

# NVIDIA forum recommendation
docker run --runtime=nvidia -it --rm nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel /bin/bash

# Then, inside the docker:

apt-get update
apt install libopenmpi-dev
pip install nvidia_tao_deploy==5.0.0.423.dev0
pip install https://urm.nvidia.com/artifactory/sw-eff-pypi/nvidia-eff-tao-encryption/0.1.7/nvidia_eff_tao_encryption-0.1.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
pip install https://urm.nvidia.com/artifactory/sw-eff-pypi/nvidia-eff/0.6.2/nvidia_eff-0.6.2-py38-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl

detectnet_v2 --help 

I attached another log for these commands; it seems there are more dependency issues.

logs.txt (53.8 KB)
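
One quick way to surface the remaining dependency conflicts inside the container is pip's built-in checker, plus a filtered package listing (a diagnostic sketch, not a fix):

# Report packages whose declared dependencies are unsatisfied or conflicting
pip check
# Show the versions of the packages involved in this thread
pip list | grep -i -E "tensorrt|tao|eff|protobuf|onnx"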

This should be related to JetPack 6.0, since it is based on Ubuntu 22.04.
On previous JetPack versions, this error does not occur.
We will check it further.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Any update on the issue?

The internal team is still checking on it.
As a workaround, please flash JetPack 5.x onto the Jetson device.
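
If you go that route, the currently flashed L4T release can be confirmed from the device itself:

# Print the L4T (JetPack BSP) release string on a Jetson device
cat /etc/nv_tegra_release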

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.