TensorRT with nvcr.io/nvidia/l4t-base:r32.3.1 docker image

Hello,

I’m using the nvcr.io/nvidia/l4t-base:r32.3.1 docker image from NVIDIA L4T Base | NVIDIA NGC and running it with this command:

sudo docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-base:r32.3.1

My understanding is that TensorRT should be available inside the container after that; see the text quoted below from the link above. But when I check, I don’t see it. Any ideas why?

Similarly, CUDA and TensorRT are ready to use within the l4t-base container as they are made available from the host by the NVIDIA container runtime.

Thanks.

Hi,

That docker container only has L4T base and CUDA installed.
If you want TensorRT preinstalled, please check this one:
https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t

Thanks

Thanks @Aastall. I tried that but don’t see TensorRT in that one either. I run docker with the following command and then run

sudo docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:4.0.2-19.12-samples
dpkg -l | grep TensorRT

or

dpkg -l | grep nvinfer

and get nothing. What am I doing wrong?
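One thing I noticed while digging (an assumption on my part, so please correct me): on Jetson the NVIDIA container runtime mounts host libraries into the container according to CSV files rather than installing Debian packages, so dpkg can come up empty even when the libraries are present. Checking the files directly seems more reliable:

ls -l /usr/lib/aarch64-linux-gnu/libnvinfer*                                  # inside the container
grep -ri nvinfer /etc/nvidia-container-runtime/host-files-for-container.d/   # on the host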


I’m having similar issues. Also, I have a device with JetPack 4.2, which has TRT5. Is it possible to upgrade TRT5 to TRT7 but stay on JetPack 4.2?

Hi, it isn’t possible to upgrade TensorRT independently of the JetPack-L4T version, as TensorRT has dependencies on the underlying versions of CUDA, cuDNN, and the L4T drivers in JetPack. TensorRT 7 will be included in the next release of JetPack, so please stay tuned.
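You can check which versions JetPack installed on your device like this (a quick sketch; the exact package names vary between releases):

dpkg -l | grep -E 'nvinfer|libcudnn|cuda-toolkit'
cat /etc/nv_tegra_release        # the L4T release the drivers belong to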

I see, thanks!

We’re using a third-party industrial-grade Xavier, so we have to wait for the vendor to release JetPack 4.3. We want to see if there’s a way to use docker so we can use a newer version of TRT as soon as possible. I’m trying to build a docker image following this repo: https://github.com/BouweCeunen/computer-vision-jetson-nano, installing the TRT6 deb instead of the current TRT5 deb, to see what happens. Do you mean it won’t work even if the docker image has the matching CUDA, cuDNN, and L4T dependencies?
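Roughly the experiment I have in mind (only a sketch; the .deb filenames are placeholders for whatever TRT6 packages I can obtain, not verified package names):

sudo docker run -it --runtime nvidia -v $PWD/debs:/debs nvcr.io/nvidia/l4t-base:r32.3.1
dpkg -i /debs/libnvinfer6_*_arm64.deb /debs/libnvinfer-dev_*_arm64.deb   # inside the container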

Hi,

Here is a container with TensorRT pre-installed.
Would you mind checking whether this meets your requirements first?
https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t

Thanks.

Is there a way to redirect the display output to an SSH session somehow?

./nbody 
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
	-fullscreen       (run n-body simulation in fullscreen mode)
	-fp64             (use double precision floating point values for simulation)
	-hostmem          (stores simulation data in host memory)
	-benchmark        (run benchmark to measure performance) 
	-numbodies=<N>    (number of bodies (>= 1) to run in simulation) 
	-device=<d>       (where d=0,1,2.... for the CUDA device to use)
	-numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
	-compare          (compares simulation results running once on the default GPU and once on the CPU)
	-cpu              (run n-body simulation on the CPU)
	-tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
X11 connection rejected because of wrong authentication.
freeglut (./nbody): failed to open display 'localhost:12.0'

I’m using ssh -X to access the Nano from the host PC.
Found a solution: https://devtalk.nvidia.com/default/topic/1057954/jetson-nano/jetson-nano-headless-rendering/post/5365604/#5365604
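For anyone else who hits the X11 authentication error, two workarounds that are commonly suggested (my own notes, not quoted from that post): trusted X11 forwarding with ssh -Y often gets past the xauth rejection, and benchmark mode should skip the OpenGL window entirely:

ssh -Y user@jetson-nano          # user@jetson-nano is a placeholder for your device
./nbody -benchmark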