l4t-tensorrt image for TRT 7 / JetPack 4.5


Where can I find an l4t-tensorrt Docker image for TRT 7 / JetPack 4.5, or can I build it myself?

Similarly, are there any instructions on how to rebuild the l4t-tensorrt image for TRT 8 / JetPack 4.6 with the latest Python 3.8.x version?


Hi @charlesfr.rey, on JetPack 4.x you can just use the l4t-base container, because CUDA/cuDNN/TensorRT/etc. get mounted into the container from the host device by the NVIDIA Container Runtime. So l4t-base should already have these components available in it.
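For example, something like the following should start a container where the host's TensorRT is visible (a sketch: the r32.5.0 tag corresponding to JetPack 4.5 is an assumption, so match it to your actual L4T release):

```shell
# Run l4t-base with the NVIDIA runtime so CUDA/cuDNN/TensorRT are mounted
# in from the host (JetPack 4.x behavior). Tag r32.5.0 assumes JetPack 4.5.
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.5.0 \
    python3 -c "import tensorrt; print(tensorrt.__version__)"
```

If the import succeeds, the printed version should match the TensorRT installed on the host, not anything baked into the image.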

I believe the container image for this would be nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime; however, it would come with Python 3.6, as we build the images for the default version of Python that ships with the version of Ubuntu used (and on JetPack 4.x, that's Ubuntu 18.04 and Python 3.6).

However, you may be able to rebuild the TensorRT Python bindings for Python 3.8, as some others on the forums have done.
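As a rough, hedged sketch of that rebuild (the repository layout, environment-variable names, and wheel path are assumptions based on the TensorRT OSS repo, so follow the forum posts for the exact steps matching your TensorRT release):

```shell
# Sketch only -- verify each step against the TensorRT OSS repo
# and the forum posts for your specific TensorRT release.
sudo apt-get install -y python3.8 python3.8-dev
git clone https://github.com/NVIDIA/TensorRT.git
cd TensorRT/python
# The bindings build is driven by env vars selecting the target
# Python version and architecture (names are assumptions):
PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=8 TARGET_ARCHITECTURE=aarch64 ./build.sh
# Install the resulting wheel into Python 3.8 (wheel path may differ):
python3.8 -m pip install build/dist/tensorrt-*.whl
```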

Thanks, I have yet to try it, but it looks promising.

As a side note, I was able to run the nvcr.io/nvidia/l4t-tensorrt:r8.0.1 image on JetPack 4.5, which brings Python 3.8 and CUDA 10.2 support (JetPack 4.6 and 4.6.1 also use CUDA 10.2), in case you cannot upgrade JetPack itself for whatever reason but need the newer Python version.

Of course the TensorRT version doesn't match, but for things like ONNX Runtime that use only CUDA it is perfect, and it allows keeping a single base image across a mixed-versions fleet.

For reference, the trick on JetPack 4.5 is to set NVIDIA_DISABLE_REQUIRE=true to avoid the error that nvidia-container-cli otherwise throws: docker: Error response from daemon: failed to create shim: OCI runtime create failed ... nvidia-container-cli: requirement error: invalid expression: unknown.
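Concretely, passing the variable with `-e` looks something like this (the image tag is the one mentioned above; you could equally set the variable via ENV in a Dockerfile):

```shell
# On JetPack 4.5, disable nvidia-container-cli's version-requirement check
# so the newer r8.0.1 image is allowed to start:
sudo docker run -it --rm --runtime nvidia \
    -e NVIDIA_DISABLE_REQUIRE=true \
    nvcr.io/nvidia/l4t-tensorrt:r8.0.1
```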
