Is the l4t-base image independent of the host JetPack version?

My question is: do l4t-base images install all of their libraries during docker build, regardless of which JetPack version the Jetson has?

If my Jetson device has JetPack with L4T version A and I use an l4t-base image with version tag B, are all libraries (such as CUDA, cuDNN, TensorRT …) installed separately inside the Docker image, regardless of the host version?

The point is:

  1. When I need to use different JetPack environments on a single Jetson device, can l4t-base images free me from re-installing each JetPack version every time?

  2. I’m trying to build a TensorRT Python wheel on the Jetson. Is a wheel built on another Jetson device with a different JetPack version compatible?

I don’t know about Docker images, but the L4T version has a strong dependency on the GPU driver. The CUDA version is tied to the L4T version and shouldn’t be mixed: the version which comes with a given JetPack/SDK Manager release should be used with the intended L4T release. Everything should be available within that one release, but if the version of some software is not what you need, then consider flashing a new L4T release using the JetPack/SDKM which comes with that release (or, in the case of Docker, perhaps using a different Docker image).

Note: You can install a lot of software on the host PC’s “Linux_for_Tegra/rootfs/” and have it appear immediately after flashing. You would not want to install system libraries this way, but your own software (if it links against a library version in the rootfs) can easily be added somewhere like a home directory or “/usr/local” (note that you can add your users and passwords this way as well; then you’ll have a home directory and user already set up immediately upon flashing).
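As a rough sketch of what that could look like (the paths are illustrative: Linux_for_Tegra lives wherever SDK Manager downloaded the release, and the flash.sh arguments depend on your board and boot target):

    # Stage your own software into the rootfs so it appears on the Jetson right after flashing.
    cd ~/nvidia/nvidia_sdk/<your_release>/Linux_for_Tegra   # adjust to your SDK Manager download path
    sudo cp -r ~/my_app rootfs/usr/local/my_app             # will show up as /usr/local/my_app on the device
    sudo ./flash.sh jetson-xavier mmcblk0p1                 # then flash as usual for your board/target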


The l4t-base container is dependent on the underlying JetPack version, because CUDA/cuDNN/TensorRT are mounted into the container dynamically at runtime when --runtime nvidia is used. These packages aren’t installed into the container; they are mounted from the host device instead (for more info about that, see here).
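As a sketch (the r32.5.0 tag is only an example; use the tag matching your L4T release), you can see the mounted libraries from inside a container started with the NVIDIA runtime:

    # With --runtime nvidia, CUDA/cuDNN/TensorRT from the host JetPack are visible inside the container:
    sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.5.0 \
        sh -c 'ls /usr/local/cuda && ls /usr/lib/aarch64-linux-gnu/libnvinfer*'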

Hence the version of the l4t-base container should match the version of JetPack. The exception is JetPack 4.5.1 (L4T R32.5.1), which uses the L4T R32.5.0 base container because the BSP didn’t change between those versions.
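For example, you can check the host’s L4T release and pull the matching tag (a sketch; tags on NGC follow the r<major>.<minor>.<patch> pattern):

    # Check the L4T release on the Jetson:
    cat /etc/nv_tegra_release
    # e.g. "# R32 (release), REVISION: 5.1, ..." means JetPack 4.5.1 / L4T R32.5.1,
    # which (per the exception above) uses the R32.5.0 base container:
    sudo docker pull nvcr.io/nvidia/l4t-base:r32.5.0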

In the future we’ll be enabling different versions of l4t-base to run on JetPack, so the container won’t need to be rebuilt for each JetPack version. But currently the versions of CUDA/cuDNN/TensorRT are tied to the underlying version of L4T and the GPU driver.


I found something curious.
When I build an image from l4t-base on an ARM device (not a Jetson) without any CUDA libraries, the built image already contains /usr/local/cuda.

It seems that the l4t-base image has CUDA headers inside it.
I think I can use these headers to compile my TensorRT wheel by adding the TensorRT binaries from jetpack_files.

Can I assume that the l4t-base image has CUDA headers corresponding to its L4T version?
I saw in the repo below that Dockerfile.cuda gets CUDA via apt-get.

I haven’t used this other version of l4t-base before, but it appears that it installs CUDA inside the container. The normal version of l4t-base does not. If you are using the normal l4t-base, the TensorRT Python API will be mounted into the container at runtime, so you shouldn’t need to compile it yourself.
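A quick way to check that (a sketch; assumes JetPack 4.x, where the host’s bindings live under /usr/lib/python3.6/dist-packages, and that python3 is available in the image):

    # The TensorRT Python bindings mounted from the host should be importable inside the container:
    sudo docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.5.0 \
        python3 -c "import tensorrt; print(tensorrt.__version__)"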