For Jetsons, the container prepared for working with CUDA + cuDNN + TensorRT is l4t-base. But "the NVIDIA Container Runtime mounts platform-specific libraries into the l4t-base container from the underlying host, thereby providing necessary dependencies for l4t applications to execute within the container." That means the container is CUDA-agnostic: it trusts whatever CUDA version I've got on the host and just mounts it. But…
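A minimal sketch of what that host mounting looks like in practice (the image tag and paths below are examples, not verified for your release; pick the tag matching your L4T version):

```shell
# On a Jetson host with the NVIDIA Container Runtime installed.
# l4t-base ships no CUDA of its own; the runtime bind-mounts the host's
# CUDA/cuDNN/TensorRT libraries into the container when it starts.
sudo docker run -it --rm \
    --runtime nvidia \
    nvcr.io/nvidia/l4t-base:r32.4.3 \
    /usr/local/cuda/bin/nvcc --version   # reports the HOST's CUDA version
```

The set of host files that get mounted is driven by the CSV files under `/etc/nvidia-container-runtime/host-files-for-container.d/` on the host, which is why the container always sees the host's CUDA and never its own.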

What happens if I want to develop with another CUDA version, or even have different CUDA containers, each with a different version in it?

Then I've seen other images that are CUDA-version-specific, and they are what I am looking for, but are they compatible with Jetsons? If not, can this problem be solved with any workaround?


The base image mounts libraries from the host to save space.
We also have containers that include CUDA or TensorRT:

CUDA: l4t-cuda
CUDA+cuDNN+TensorRT: l4t-tensorrt
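A hedged sketch of using those self-contained images instead of l4t-base (the tags shown are examples only; check the NGC catalog for tags that match your JetPack/L4T release):

```shell
# CUDA baked into the image rather than mounted from the host:
sudo docker pull nvcr.io/nvidia/l4t-cuda:11.4.19-runtime

# CUDA + cuDNN + TensorRT in one image:
sudo docker pull nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime

# --runtime nvidia is still needed for GPU device access:
sudo docker run -it --rm --runtime nvidia \
    nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime \
    /usr/src/tensorrt/bin/trtexec --help
```

Because the toolkit lives inside the image, two containers from different tags can carry two different CUDA versions side by side on the same Jetson, which is exactly the scenario asked about above.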



Thanks for the tip about TensorRT inside the container. But I'm missing at least one previous version of TensorRT. Why is there only one tag? How could I uninstall a version and install an older one on arm64? I mean, the SDK site with archives doesn't cover arm64, and I cannot find previous versions of the Debian package.
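One hedged workaround, assuming the Jetson's configured apt repository still carries older packages (the version string below is a placeholder, not a verified release):

```shell
# List the TensorRT versions the configured apt repos actually offer on arm64:
apt-cache madison tensorrt libnvinfer8

# If an older version shows up, remove the current one and pin the older one
# explicitly (example version string only):
sudo apt-get remove --purge "libnvinfer*" tensorrt
sudo apt-get install tensorrt=8.0.1.6-1+cuda10.2
```

In practice, TensorRT on Jetson is tied to the JetPack/L4T release, so getting an older TensorRT usually means flashing an older JetPack or pulling an older l4t-tensorrt image tag rather than downgrading the package in place.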