No nvidia-l4t-base image for L4T version 32.2.3

Hello,

I am currently trying to build a docker image that can run pytorch inference on my Jetson Nanos, which are on L4T version 32.2.3. I see that the following base images exist at this link:

  • nvcr.io/nvidia/l4t-base:r32.2.1
  • nvcr.io/nvidia/l4t-base:r32.3.1

Is there a version of l4t-base that exists for devices on L4T 32.2.3? If not, what is the recommended approach for either building a compatible base image myself or remotely upgrading L4T on my devices?

Hi,

Sorry for the late update.

For the l4t-base container, all the available images can be found here:
https://ngc.nvidia.com/catalog/containers/nvidia:l4t-base/tags

Tags

  • r32.4.2 …
  • r32.3.1 …
  • r32.2.1 …
  • r32.2 …

Do you have any dependencies on r32.2.3?
If not, it’s recommended to use this image, which is based on the latest r32.4.2:
https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch

It has PyTorch pre-installed, so you don’t need to build it on your own.
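A minimal pull-and-run sketch on the device (the exact tag name below is an assumption — check the Tags list on the NGC page for the current ones; `--runtime nvidia` is what mounts the host GPU libraries into the container):

```shell
# Pull the l4t-pytorch image for the host's L4T release
# (tag name assumed; verify it on the NGC Tags page)
docker pull nvcr.io/nvidia/l4t-pytorch:r32.4.2-pth1.5-py3

# Run it with the NVIDIA runtime so the host's CUDA/driver
# libraries are mounted into the container
docker run -it --rm --runtime nvidia --network host \
    nvcr.io/nvidia/l4t-pytorch:r32.4.2-pth1.5-py3
```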
Thanks.

If the host device is on L4T 32.2.3 but I try to build with the base image which is based on 32.4.2, won’t there be issues? Doesn’t 32.4.2 rely on CUDA 10.2 being on the host while, on a device running 32.2.3, CUDA 10.0 is installed?

Hi,

You will need to use the same L4T version on the host and in the container.
That’s because some libraries are mounted directly from the host into the container.
A version mismatch will cause compatibility issues.
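As a quick sanity check before picking a container tag, the host's L4T release can be read from `/etc/nv_tegra_release` on the Jetson. A minimal parsing sketch (the sample line below is an assumed stand-in for that file's contents on a 32.2.3 device; on a real Jetson, read the file itself):

```shell
# Assumed sample of the first line of /etc/nv_tegra_release on a 32.2.3
# device; on a real Jetson use: line=$(head -n1 /etc/nv_tegra_release)
line='# R32 (release), REVISION: 2.3, GCID: 12345678, BOARD: t210ref'

# Extract the major release ("32") and the revision ("2.3"), then join them
major=$(printf '%s' "$line" | sed -n 's/^# R\([0-9]*\) (release).*/\1/p')
rev=$(printf '%s' "$line" | sed -n 's/.*REVISION: \([0-9.]*\),.*/\1/p')
echo "L4T ${major}.${rev}"   # prints "L4T 32.2.3" for the sample line
```

The container tag you choose (e.g. `r32.4.2`) should match this reported version.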

We suggest you reflash your Nano to r32.4.2 (JetPack 4.4) and use the latest image.
Thanks.