Hi,
I am working with DeepStream 6.0 on JetPack 4.6 / L4T 32.6.1. I am trying to set up DeepStream via the Docker container, but when I run it, TensorRT, CUDA, and cuDNN are not mounted inside the container. I checked that the packages are installed locally on the host, but they still do not get mounted. Also note that I need to stay on this JetPack version and its matching DeepStream version.
I had to create the tensorrt.csv, cuda.csv, and cudnn.csv files manually under /etc/nvidia-container-runtime/host-files-for-container.d; the only file that existed out of the box was l4t.csv. The problem is that only l4t.csv is mounted correctly, while tensorrt.csv, cuda.csv, and cudnn.csv are not. If I concatenate the contents of the other files into l4t.csv, all the packages mount correctly, but keeping them in separate files (l4t.csv, cuda.csv, tensorrt.csv, and cudnn.csv) does not work. Based on the mount-plugin repo, the csv files I created should be picked up as well.
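To rule out typos in the csv entries themselves, I sanity-check that every "type, path" line points to a path that actually exists on the host. The `check_csv` helper below is my own script, not part of the container runtime; the demo lines at the bottom just show its output format:

```shell
# check_csv: read "type, path" lines (the nvidia-container-runtime csv
# format) from stdin and report any path that does not exist on the host.
check_csv() {
  while IFS=, read -r type path; do
    path="${path# }"                     # strip the space after the comma
    [ -e "$path" ] && echo "ok $path" || echo "MISSING $path"
  done
}

# demo entries: /bin/sh exists on any Linux host, the second path does not
printf 'lib, /bin/sh\nlib, /no/such/file.so\n' | check_csv
# → ok /bin/sh
# → MISSING /no/such/file.so
```

Running `check_csv < /etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv` on my host prints "ok" for every entry, so the paths in my csv files are valid.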
I also adjusted /etc/docker/daemon.json to ensure the default runtime is set to nvidia, but the issue persists. Why would the other csv files not be mounted? For reference, the content of my tensorrt.csv is:
lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.0.1
lib, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.0.1
lib, /usr/lib/aarch64-linux-gnu/libnvparsers.so.8.0.1
lib, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8.0.1
sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvparsers.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so
sym, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
sym, /usr/lib/aarch64-linux-gnu/libnvparsers.so
sym, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so
lib, /usr/include/aarch64-linux-gnu/NvInfer.h
lib, /usr/include/aarch64-linux-gnu/NvInferRuntime.h
lib, /usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h
lib, /usr/include/aarch64-linux-gnu/NvInferVersion.h
lib, /usr/include/aarch64-linux-gnu/NvInferImpl.h
lib, /usr/include/aarch64-linux-gnu/NvInferLegacyDims.h
lib, /usr/include/aarch64-linux-gnu/NvUtils.h
lib, /usr/include/aarch64-linux-gnu/NvInferPlugin.h
lib, /usr/include/aarch64-linux-gnu/NvInferPluginUtils.h
lib, /usr/include/aarch64-linux-gnu/NvCaffeParser.h
lib, /usr/include/aarch64-linux-gnu/NvUffParser.h
lib, /usr/include/aarch64-linux-gnu/NvOnnxConfig.h
lib, /usr/include/aarch64-linux-gnu/NvOnnxParser.h
dir, /usr/lib/python3.6/dist-packages/tensorrt
dir, /usr/lib/python3.6/dist-packages/graphsurgeon
dir, /usr/lib/python3.6/dist-packages/uff
dir, /usr/lib/python3.6/dist-packages/onnx_graphsurgeon
dir, /usr/src/tensorrt
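For completeness, this is the /etc/docker/daemon.json I am using — the stock Jetson configuration with nvidia set as the default runtime:

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

I restarted the Docker daemon (sudo systemctl restart docker) after editing this file.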
My setup is below:
• Hardware Platform (Jetson / GPU) : Jetson Xavier NX
• DeepStream Version : 6.0
• JetPack Version : JetPack 4.6 / L4T 32.6.1
• TensorRT Version : 8.0.1.6-1+cuda10.2
The CUDA libraries I have installed are also below: