Dockerfile for NVIDIA L4T TensorRT containers

Description

I am looking for the Dockerfile used to create the following container: nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime. Basically, I need to build my own image on top of this TensorRT image, since it does not include essential libraries that I need (cmake, glog, etc.).

There is this Dockerfile: TensorRT/ubuntu-20.04-aarch64.Dockerfile at release/8.2 · NVIDIA/TensorRT · GitHub, but it is not the same TensorRT version, and it does not seem to produce the same image, since that one actually installs cmake, whereas when I use nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime, I get "cmake: command not found".

Environment

Container (if container, which image + tag): nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime

Steps To Reproduce

docker pull nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime, then try to build any files using cmake.

Thanks!

Hi,
Please refer to the installation steps at the link below in case you are missing anything.

Also, we suggest you use TRT NGC containers to avoid any system-dependency-related issues.

Thanks!

Thanks for your quick reply. I actually did check the first website. But what exactly is the TensorRT container link that you just sent? It does not have the TensorRT versions that I am used to seeing (the tags go from 17 to 23), and it uses python3?

I am actually using these containers: NVIDIA L4T TensorRT | NVIDIA NGC, because my Jetson is running on L4T. My main issue with the containers there is that they don’t include cmake for some reason, so I just wanted to double-check the Dockerfile used to create the images, in particular for http://nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime.

The main issue I have is that after building with the following Dockerfile:

FROM nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime
RUN apt-get update && apt-get install -y \
    libgoogle-glog-dev \
    build-essential \
    sudo-ldap
RUN cd /tmp && \
    wget https://github.com/Kitware/CMake/releases/download/v3.21.4/cmake-3.21.4-linux-aarch64.sh && \
    chmod +x cmake-3.21.4-linux-aarch64.sh && \
    ./cmake-3.21.4-linux-aarch64.sh --prefix=/usr/local --exclude-subdir --skip-license && \
    rm ./cmake-3.21.4-linux-aarch64.sh

and then try to use cmake, I get the following error:

CMake Error at /usr/local/share/cmake-3.21/Modules/CMakeTestCCompiler.cmake:69 (message):
The C compiler
"/usr/bin/cc"
is not able to compile a simple test program.
It fails with the following output:
Change Dir: /workspace/inference/build/CMakeFiles/CMakeTmp
Run Build Command(s):/usr/bin/make -f Makefile cmTC_b43c9/fast && /usr/bin/make -f CMakeFiles/cmTC_b43c9.dir/build.make CMakeFiles/cmTC_b43c9.dir/build
make[1]: Entering directory '/workspace/inference/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_b43c9.dir/testCCompiler.c.o
/usr/bin/cc -o CMakeFiles/cmTC_b43c9.dir/testCCompiler.c.o -c /workspace/inference/build/CMakeFiles/CMakeTmp/testCCompiler.c
cc: error trying to exec 'cc1': execvp: No such file or directory
CMakeFiles/cmTC_b43c9.dir/build.make:77: recipe for target 'CMakeFiles/cmTC_b43c9.dir/testCCompiler.c.o' failed
make[1]: *** [CMakeFiles/cmTC_b43c9.dir/testCCompiler.c.o] Error 1
make[1]: Leaving directory '/workspace/inference/build/CMakeFiles/CMakeTmp'
Makefile:127: recipe for target 'cmTC_b43c9/fast' failed
make: *** [cmTC_b43c9/fast] Error 2
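For what it's worth, the "cc: error trying to exec 'cc1'" failure usually means gcc's backend is missing or broken; this can happen when the apt-get RUN line loses its backslash continuations, so build-essential is silently never installed. A minimal sketch of a workaround, assuming gcc/g++ are available from the image's Ubuntu apt repositories:

```dockerfile
FROM nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime
# Force-reinstall the full toolchain so the cc1 backend is actually present.
RUN apt-get update && apt-get install -y --reinstall \
        gcc g++ build-essential \
    && rm -rf /var/lib/apt/lists/*
# Sanity check: compile and run a trivial C program at build time.
RUN echo 'int main(void){return 0;}' > /tmp/t.c \
    && cc /tmp/t.c -o /tmp/t && /tmp/t
```

If the sanity-check RUN step succeeds, cmake's compiler test should pass as well.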

Hi @bayab,
Here you can find the contents of the TRT 8.2.1 image, and I believe you can create a Dockerfile referring to the one you have shared.

Thanks

Thanks! Can you explain the difference between these TensorRT containers:

TensorRT | NVIDIA NGC (the one you pointed me to)

and these ones:

NVIDIA L4T TensorRT | NVIDIA NGC ?

Can I still run the containers in here TensorRT | NVIDIA NGC if my Jetson Xavier runs on L4T?

Thanks

Hi @bayab

I believe this can be done.
However, I will re-validate this.
Thanks