NVIDIA container not mounting TensorRT

I am using l4t-base:r32.6.1 to build a Docker image. The container should mount all the necessary CUDA and TensorRT files from the host machine, but the mount fails, which causes build errors in my code.

I can find NvInfer.h on my host machine:

$ sudo find / -name NvInfer.h
/usr/include/aarch64-linux-gnu/NvInfer.h

Running the same command inside the Docker image returns nothing:

$ find / -name NvInfer.h

The mapping files exist in /etc/nvidia-container-runtime/host-files-for-container.d: cuda.csv, cudnn.csv, l4t.csv, tensorrt.csv.
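For reference, listing the directory shows the same four files:

$ ls /etc/nvidia-container-runtime/host-files-for-container.d/
cuda.csv  cudnn.csv  l4t.csv  tensorrt.csv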

$ cat /etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv
lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.0.1
lib, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.0.1
lib, /usr/lib/aarch64-linux-gnu/libnvparsers.so.8.0.1
lib, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8.0.1
sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvparsers.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so
sym, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
sym, /usr/lib/aarch64-linux-gnu/libnvparsers.so
sym, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so
lib, /usr/include/aarch64-linux-gnu/NvInfer.h
lib, /usr/include/aarch64-linux-gnu/NvInferRuntime.h
lib, /usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h
lib, /usr/include/aarch64-linux-gnu/NvInferVersion.h
lib, /usr/include/aarch64-linux-gnu/NvInferImpl.h
lib, /usr/include/aarch64-linux-gnu/NvInferLegacyDims.h
lib, /usr/include/aarch64-linux-gnu/NvUtils.h
lib, /usr/include/aarch64-linux-gnu/NvInferPlugin.h
lib, /usr/include/aarch64-linux-gnu/NvInferPluginUtils.h
lib, /usr/include/aarch64-linux-gnu/NvCaffeParser.h
lib, /usr/include/aarch64-linux-gnu/NvUffParser.h
lib, /usr/include/aarch64-linux-gnu/NvOnnxConfig.h
lib, /usr/include/aarch64-linux-gnu/NvOnnxParser.h
dir, /usr/lib/python3.6/dist-packages/tensorrt
dir, /usr/lib/python3.6/dist-packages/graphsurgeon
dir, /usr/lib/python3.6/dist-packages/uff
dir, /usr/lib/python3.6/dist-packages/onnx_graphsurgeon
dir, /usr/src/tensorrt
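Per that CSV, a correctly started container should have these paths mounted from the host. A quick way to check inside the container, for example:

$ ls -l /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.0.1 /usr/include/aarch64-linux-gnu/NvInfer.h

Consistent with the find result above, both paths are missing in my container.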

Based on the documentation in libnvidia-container/mount_plugins.md (jetson branch of NVIDIA/libnvidia-container on GitHub), the runtime should mount all of these files from the host into the container, but it still fails.

Does anyone know how to solve this problem?

Hi @yili.han1, if you are building from a Dockerfile, then you need to set nvidia as the default Docker runtime in order for these files to be mounted during docker build operations: https://github.com/dusty-nv/jetson-containers#docker-default-runtime
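Per the linked README, this amounts to adding a "default-runtime" entry to /etc/docker/daemon.json and restarting the Docker daemon, roughly:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

$ sudo systemctl restart docker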

Otherwise, are you starting the container with --runtime nvidia?

Thanks very much for your reply.

if you are building a dockerfile, then you need to set nvidia as the default docker runtime in order for these files to be mounted during docker build operations

=> Yes, I followed your setup, rebuilt my Docker image, and also ran the container with --runtime nvidia, but it still failed to mount TensorRT and cuDNN into the image.

The package versions installed on my Jetson TX2 are listed in the attachment.

Could it be a package version incompatibility issue? I saw someone mention a similar problem here:
libcudart.so.10.2 is not available in tensorflow image · Issue #146 · dusty-nv/jetson-containers · GitHub.

Are you able to run these commands on your system?

sudo docker run -it --rm --net=host --runtime nvidia nvcr.io/nvidia/l4t-base:r32.6.1
python3 -c 'import tensorrt'

The second command should be run inside the container that the first command starts.
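Equivalently, you should be able to run the check as a single non-interactive command (the trailing python3 invocation runs inside the container and then exits):

$ sudo docker run --rm --net=host --runtime nvidia nvcr.io/nvidia/l4t-base:r32.6.1 python3 -c 'import tensorrt'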

It showed "No module named 'tensorrt'", as shown in the attachment.

OK, sorry, it does appear that the mounting is not working correctly. Typically I would recommend reinstalling the nvidia-container-* packages, or re-flashing the device, to get those working again.
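A sketch of the reinstall, assuming the standard JetPack package names (the exact set can vary by L4T release, so first check what is installed with apt list --installed | grep nvidia-container):

$ sudo apt-get update
$ sudo apt-get install --reinstall nvidia-container-toolkit nvidia-container-runtime nvidia-docker2
$ sudo systemctl restart docker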
