How to use TensorRT in a container with a python3 application?

Hi, I want to use TensorRT in a Docker container for my python3 app on my Jetson Nano device.
My setup is below:
NVIDIA Jetson Nano (Developer Kit Version)
L4T 32.3.1 [ JetPack 4.3 ]
Ubuntu 18.04.3 LTS
Kernel Version: 4.9.140-tegra
CUDA 10.0.326
CUDA Architecture: 5.3
OpenCV version: 4.1.1
OpenCV Cuda: NO
CUDNN: 7.6.3.28
TensorRT: 6.0.1.10
VisionWorks: 1.6.0.500n
VPI: 0.1.0
Vulkan: 1.1.70

I use the nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples image as the base for my Dockerfile (I did not use the latest DeepStream image because the L4T and TensorRT versions on my Jetson are old).
I have already set the default runtime to nvidia. But when I run “import tensorrt as trt”, I get the error “ModuleNotFoundError: No module named ‘tensorrt’”. How can I make it possible for my python app to see the TensorRT that is already installed on the Jetson Nano host?
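
For completeness, this is the minimal check I run inside the container with python3 (the sys.path print is only there as a diagnostic, to show where Python is searching for modules):

# minimal repro, run inside the container with python3
import sys
print(sys.version)          # confirm which interpreter the container is using
print("\n".join(sys.path))  # where this Python looks for modules
import tensorrt as trt      # raises ModuleNotFoundError: No module named 'tensorrt'
print(trt.__version__)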

Hi @hasever, you may need to upgrade/reflash your SD card with a newer version of JetPack to get the TensorRT Python libraries in the containers. I believe more recent versions of JetPack automatically add the TensorRT Python libraries to the containers.


Thanks,
Is JetPack 4.3 not enough for that?
So TensorRT and some libraries are added to the container from the host Jetson Nano, am I right? How does this mechanism work, and will I see tensorrt in my container after that upgrade (for example, in “pip3 list”)? Will I have to do further things, like setting the default runtime to nvidia?

I don’t recall that the CSV files under /etc/nvidia-container-runtime/host-files-for-container.d/, which are responsible for mounting the host files into the container, included the TensorRT Python libraries on the older versions of JetPack.

You could try adding these lines to /etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv if they aren’t already there:

dir, /usr/lib/python2.7/dist-packages/tensorrt
dir, /usr/lib/python2.7/dist-packages/graphsurgeon
dir, /usr/lib/python2.7/dist-packages/uff
dir, /usr/lib/python3.6/dist-packages/tensorrt
dir, /usr/lib/python3.6/dist-packages/graphsurgeon
dir, /usr/lib/python3.6/dist-packages/uff
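
After editing the CSV, a quick sanity check from a newly started container would be something like this (a sketch in python3; the paths are the python3.6 ones listed above):

# sketch: run inside a freshly started container to verify the CSV mounts took effect
import os
import importlib

dirs = [
    "/usr/lib/python3.6/dist-packages/tensorrt",
    "/usr/lib/python3.6/dist-packages/graphsurgeon",
    "/usr/lib/python3.6/dist-packages/uff",
]
for d in dirs:
    print(d, "->", "mounted" if os.path.isdir(d) else "MISSING")

trt = importlib.import_module("tensorrt")  # should now succeed if the mounts worked
print("TensorRT version:", trt.__version__)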

Suffice it to say, if the container then fails to start or it still isn’t working, I recommend upgrading JetPack.

You should only need --runtime nvidia when you do docker run. Setting the default runtime to nvidia is for when you need CUDA/etc. while building Dockerfiles with docker build.
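
For reference, the default runtime lives in /etc/docker/daemon.json on the Jetson; here is a minimal sketch to confirm it from the host (the expected contents shown in the comment assume a stock JetPack setup):

# sketch: confirm on the host that Docker's default runtime is nvidia
# /etc/docker/daemon.json is expected to look roughly like:
#   {
#       "default-runtime": "nvidia",
#       "runtimes": {
#           "nvidia": { "path": "nvidia-container-runtime", "runtimeArgs": [] }
#       }
#   }
import json

with open("/etc/docker/daemon.json") as f:
    cfg = json.load(f)
print("default-runtime:", cfg.get("default-runtime", "(not set)"))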


Thanks again for your fast reply.
I want to run containers through a k3s setup. I have set the runtime of k3s to docker, and after that I set the default Docker runtime of the Jetson Nano to nvidia. So I assume that with this setup I should have TensorRT, CUDA/etc. in my k3s pods/containers? Is that true?
I will also try adding the lines that you specified.

If the Jetson(s) you are deploying have JetPack and CUDA/etc. in the OS, then CUDA/etc. will be mounted into all containers when --runtime nvidia is used (or, in your case, when the default runtime is nvidia).

In the DeepStream container, check whether you can see /usr/src/tensorrt (this is also mounted from the host).
I think the TensorRT Python libraries were only added to the CSV mounting files later on.
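
A quick sketch to check both mounts from inside the container (assuming the usual /usr/local/cuda location):

# sketch: verify the host's TensorRT samples and CUDA toolkit are visible in the container
import os

for path in ["/usr/src/tensorrt", "/usr/local/cuda"]:
    if os.path.isdir(path):
        print(path, "is mounted; sample contents:", sorted(os.listdir(path))[:5])
    else:
        print(path, "is NOT mounted - check the runtime and the CSV files")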
