NVIDIA container runtime and TensorRT

Description

I am creating a Docker image on an AGX Xavier, using nvcr.io/nvidia/l4t-base:r32.3.1 as my base image. The application I will be running inside the container needs TensorRT 5.1.5, which is why I chose that base image. How can I install the correct TensorRT version? The correct version is already installed on the host Xavier (JetPack 4.2.3), but I am not sure how to install it inside the Docker image.

Is TensorRT made available automatically when I run the image with the --runtime nvidia option? If not, is there a way to install it inside the Docker image?

Environment

TensorRT Version: 5.1
GPU Type: AGX Xavier
Nvidia Driver Version:
CUDA Version: 10.0
CUDNN Version:
Operating System + Version: Ubuntu 18.04 (l4t)

The platform-specific libraries and select device nodes for a particular device are mounted into the l4t-base container from the underlying host by the NVIDIA container runtime, providing the dependencies that L4T applications need to run inside the container.
Similarly, CUDA and TensorRT are ready to use within the l4t-base container because they are made available from the host by the NVIDIA container runtime.
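If it helps, here is a minimal sketch of how you could verify this from inside the container. It assumes the container is launched with `--runtime nvidia` and that the host JetPack 4.2.3 install exposes the TensorRT Python bindings to the container; the exact `docker run` command in the comments is illustrative, not a required invocation.

```python
# Quick sanity check to run inside the container, assuming it was started with
# the NVIDIA runtime, e.g.:
#   docker run -it --runtime nvidia nvcr.io/nvidia/l4t-base:r32.3.1
# The TensorRT Python bindings and libraries come from the host JetPack install;
# they are mounted in by the runtime rather than installed in the image itself.

import tensorrt as trt

# On a JetPack 4.2.3 host this should report a 5.1.x version.
print("TensorRT version:", trt.__version__)

# Creating a logger and builder exercises the underlying libnvinfer library,
# confirming the mounted libraries actually load.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
print("TensorRT builder created:", builder is not None)
```

Because the libraries are mounted from the host rather than baked into the image, the TensorRT version seen inside the container will match whatever JetPack version is installed on the host.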

Thanks


That is good to know. Thank you for the clarification.