EGX - `nvidia` docker runtime on Nano

This was announced yesterday. Bolded is the particular pony I have been wishing for. Is there a beta image or apt package available for the nvidia docker runtime on Nano?

Hi mdegans,

It’s coming; stay tuned to our announcements in the coming month. In the meantime, there are existing recipes for running GPU-accelerated workloads within standard Docker containers. It’s a bit of additional work, but you can get started today using one of the community-supplied recipes, such as this one:


Thanks for the timeline! I will wait for EGX and prototype any containers on x86 while I wait.


nvidia-docker will be included with JetPack 4.2.1, to be released the first week of July. :-)

Thank you for the update! I appreciate it.

Awesome! If you want a beta tester for JetPack 4.2.1, I’m available ;-)

Will Jetpack 4.2.1 arrive today as planned?

Earlier this week I saw them say “next week” in another post.
Note that yesterday was a major holiday in the US, and today a lot of people are taking the day off to make it a four-day holiday.

Guessing last-minute commits broke everything. It happened to my spouse right before the 4th. He was stuck on his laptop half the day as a result. I’m fine with the delay for 4.2.1 personally. Stuff happens, and I would rather it be stable.

I also want JetPack 4.2.1. Please :)

@prlawrence: Any update on the release date?

P.S.: While waiting for the NVIDIA Docker runtime, I automated provisioning Jetson devices to join Kubernetes clusters, including automatically building a suitable kernel for Weave networking and using the /dev/nv* mount approach to support CUDA - see
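For reference, the /dev/nv* approach amounts to passing the Tegra GPU device nodes and the host’s CUDA libraries into a stock container by hand. A minimal sketch (device names and paths are from my setup and may differ per JetPack release; check /dev/nv* on your board):

```shell
#!/bin/sh
# Pre-nvidia-runtime workaround sketch: expose the Tegra GPU device nodes
# and bind-mount the host's CUDA libraries into a plain container.
# Device names and paths are illustrative, not authoritative.
DOCKER_ARGS="--device /dev/nvhost-ctrl \
--device /dev/nvhost-ctrl-gpu \
--device /dev/nvmap \
-v /usr/local/cuda:/usr/local/cuda \
-v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra"

# Print the full command rather than running it, so the sketch can be
# inspected on machines without Docker or a Jetson attached.
echo "docker run -it $DOCKER_ARGS arm64v8/ubuntu:18.04"
```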

We are fixing a couple last bugs, so as you can tell the schedule has unfortunately slipped. We’ll update again when the release is finalized – JetPack 4.2.1 has lots of great updates and we look forward to sharing it.

Nice! I’m sure many others will want to try something like this, thanks for taking the time to document your work. :-)

@prlawrence: By now it contains a Kubernetes deployment of a Jupyter server supporting CUDA-accelerated TensorFlow running on Jetson nodes (such as Nanos and Xaviers).

To achieve this, the container has to mount closed-source libraries such as cuDNN from the Jetson host, see and

Question: Is this the intended approach, or will base images containing the CUDA and cuDNN libraries be provided as part of the NVIDIA Docker runtime in JetPack 4.2.1?

P.S.: For those interested in getting Anaconda running on Jetson - it’s part of the deployment, see the Dockerfile.

The “l4t-base” image will be very slim – libraries will be mounted from the host to the container. See:

-v /usr/local/cuda:/usr/local/cuda
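With the runtime handling these mounts automatically, the run command itself stays simple. A sketch (the image tag here is an assumption; match it to your installed L4T release):

```shell
#!/bin/sh
# With the nvidia runtime, CUDA/cuDNN are mounted from the host, so the
# l4t-base image itself stays slim. The r32.2 tag is an assumption; use
# the tag matching your L4T/JetPack release.
CMD="sudo docker run -it --runtime nvidia nvcr.io/nvidia/l4t-base:r32.2"

# Printed rather than executed so the sketch works without Docker present;
# inside the container, /usr/local/cuda comes from the host mount.
echo "$CMD /usr/local/cuda/bin/nvcc --version"
```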

Is there a reason this isn’t included in the image? I really don’t like bind mounts, in part because of container escapes, and in part because I’ve had issues using them with Swarm.

Edit: Apparently there is a reason given in the link (image size), and the problem is being worked on.

Edit, also:

Other than that, thank you for the beta. Very cool!

@prlawrence, @mdegans - sounds promising, thanks for the input.

Btw: The page you referenced contains typos/inconsistencies, and I guess the “l4t-base” image is not yet published, see ?

I’m happy to report that GPU-accelerated Docker works fine on my Nano running JetPack 4.2.1. As Helmut mentioned, there’s an issue with pulling the image from the NVIDIA cloud, so I created my own images. If you’re interested, see here: I tried almost all of the CUDA samples in the devel container; they all run flawlessly. Thanks for this great release!

The image was published, see :-)

For those interested in running Jetson edge devices as part of a larger multi-platform Kubernetes cluster: I’m happy to report that JetPack 4.2.1 works with Kubernetes + Weave networking just fine after minor tweaks. Thank you NVIDIA for this wonderful release :-)

Have a look at for the necessary kernel configuration - or for the bigger picture.
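For context, the tweaks in my case came down to kernel options that the stock Jetson kernel doesn’t enable. As a hedged sketch (option names taken from Weave Net’s fast-datapath requirements; verify the exact set against your Weave release notes and kernel version):

```
# Kernel config fragment (sketch): modules Weave Net's fast datapath relies on.
# Not exhaustive - check your Weave and kernel versions for the full list.
CONFIG_OPENVSWITCH=m
CONFIG_VXLAN=m
```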