EGX - `nvidia` docker runtime on Nano

https://www.nvidia.com/en-us/data-center/products/egx-edge-computing/

This was announced yesterday, and the NVIDIA container runtime for Docker is the particular pony I have been wishing for. Is there a beta image or apt package available for the nvidia docker runtime on Nano?

Hi mdegans,

It’s coming; stay tuned to our announcements in the coming month. In the meantime, there are existing recipes for running GPU-accelerated workloads within standard Docker containers. It is a bit of additional work, but you can get started today using one of the community-supplied recipes such as this one: https://blog.hypriot.com/post/nvidia-jetson-nano-intro/

Thanks

Thanks for the timeline! I will wait for EGX and prototype my containers on x86 in the meantime.

mdegans,

nvidia-docker will be included with JetPack 4.2.1, to be released the first week of July. :-)

Thank you for the update! I appreciate it.

Awesome! If you need beta testers for JetPack 4.2.1, I’m available ;-)

Will JetPack 4.2.1 arrive today as planned?

Earlier this week I saw them say “next week” in another post.
Note that yesterday was a major holiday in the US, and today a lot of people are taking the day off to make it a four-day holiday.

I’m guessing last-minute commits broke everything. It happened to my spouse right before the 4th; he was stuck on his laptop half the day as a result. I’m fine with the delay for 4.2.1 personally. Stuff happens, and I would rather it be stable.

I also want JetPack 4.2.1. Please :)

@prlawrence: Any update on the release date?

P.S.: While waiting for the NVIDIA Docker runtime, I automated provisioning Jetson devices to join Kubernetes clusters, including automatically building a suitable kernel for Weave networking and using the /dev/nv* mount approach to support CUDA - see https://github.com/helmuthva/jetson
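
Roughly, the /dev/nv* mount approach boils down to passing the Jetson’s GPU device nodes and the host CUDA install into an ordinary container. Here is a minimal sketch with plain Docker; the device list and paths are assumptions based on typical L4T setups and may differ on your release, and my-cuda-app is just a placeholder image name:

# pass the GPU device nodes and the host CUDA toolkit into a stock container (sketch only)
sudo docker run -it --rm \
  --device /dev/nvhost-ctrl \
  --device /dev/nvhost-ctrl-gpu \
  --device /dev/nvhost-prof-gpu \
  --device /dev/nvmap \
  --device /dev/nvhost-gpu \
  --device /dev/nvhost-as-gpu \
  -v /usr/local/cuda:/usr/local/cuda \
  my-cuda-app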

We are fixing a couple of remaining bugs, so as you can tell, the schedule has unfortunately slipped. We’ll update again when the release is finalized – JetPack 4.2.1 has lots of great updates and we look forward to sharing it.
https://devtalk.nvidia.com/default/topic/1055628/jetson-nano/new-jetson-software-modules-and-pricing/post/5360588/#5360588

Nice! I’m sure many others will want to try something like this; thanks for taking the time to document your work. :-)

@prlawrence: By now https://github.com/helmuthva/jetson contains a Kubernetes deployment of a Jupyter server supporting CUDA-accelerated TensorFlow running on Jetson nodes (such as Nanos and Xaviers).

To achieve this, the container has to mount closed-source libraries such as cuDNN from the Jetson host; see https://github.com/helmuthva/jetson/blob/master/workflow/deploy/jupyter/kustomize/base/deployments.yaml and https://github.com/helmuthva/jetson/blob/master/workflow/deploy/jupyter/src/Dockerfile
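
As a quick sanity check that the bind-mounted CUDA/cuDNN libraries are actually picked up inside the container, you can run something like the line below in the Jupyter container. This is only an illustrative snippet, not part of the linked repo, and it assumes the TensorFlow 1.x builds NVIDIA shipped for Jetson at the time:

# should print True if the mounted CUDA/cuDNN libraries are visible to TensorFlow
python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"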

Question: Is this the intended approach, or will base images containing the CUDA and cuDNN libraries be provided as part of the NVIDIA Docker runtime in JetPack 4.2.1?

P.S.: For those interested in getting Anaconda running on Jetson: it’s part of the deployment, see the Dockerfile.

The “l4t-base” image will be very slim – libraries will be mounted from the host to the container. See:
https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-Container-Runtime-on-Jetson#mount-plugins
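
For example, a minimal sketch of running the slim base image with the NVIDIA runtime (treat the exact tag and flags as assumptions; r32.1 is the tag that later appeared on NGC):

# the nvidia runtime bind-mounts CUDA, cuDNN and other host libraries into the container
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.1
# inside the container, the host’s CUDA toolkit should show up under /usr/local/cuda
ls /usr/local/cuda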

-v /usr/local/cuda:/usr/local/cuda

Is there a reason this isn’t included in the image? I really don’t like bind mounts, in part because of container escapes, and in part because I’ve had issues using them with Swarm.

Edit: Apparently there is a reason given in the link (image size), and the problem is being worked on.

Other than that, thank you for the beta. Very cool!

@prlawrence, @mdegans - sounds promising, thanks for the input.

Btw: The page you referenced contains typos/inconsistencies, and I guess the “l4t-base” image is not yet published? See https://github.com/NVIDIA/nvidia-docker/issues/1013

I’m happy to report that GPU-accelerated Docker works fine on my Nano running JetPack 4.2.1. As Helmut mentioned, there’s an issue with pulling the image from the NVIDIA cloud, so I created my own images. If you’re interested, see here: https://github.com/jetsistant/docker-cuda-jetpack. I tried almost all of the CUDA samples in the devel container, and they all run flawlessly. Thanks for this great release!
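
For anyone who wants to reproduce that, a rough sketch of building and running one of the samples inside a devel container (your-devel-image is a placeholder for whatever you build from that repo, and the samples path assumes the CUDA toolkit’s default layout):

# build and run the deviceQuery sample inside a CUDA devel container (sketch, not verbatim from the repo)
sudo docker run -it --rm --runtime nvidia your-devel-image bash -c \
  "cd /usr/local/cuda/samples/1_Utilities/deviceQuery && make && ./deviceQuery"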

The nvcr.io/nvidia/l4t-base:r32.1 image was published, see https://ngc.nvidia.com/catalog/containers/nvidia:l4t-base :-)

For those interested in running Jetson edge devices as part of a larger multi-platform Kubernetes cluster: I’m happy to report that JetPack 4.2.1 works with Kubernetes + Weave networking just fine after minor tweaks. Thank you NVIDIA for this wonderful release :-)

Have a look at https://github.com/helmuthva/jetson/blob/master/workflow/provision/roles/kernel/files/.config for the necessary kernel configuration - or https://github.com/helmuthva/jetson for the bigger picture.
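
If you just want to check whether your running kernel already has the options Weave needs before rebuilding, here is a rough sketch. It assumes CONFIG_IKCONFIG_PROC is enabled so /proc/config.gz exists, and the grep pattern is only an example; the linked .config is the authoritative list:

# inspect the running kernel’s configuration for options commonly needed by Weave / Kubernetes networking
zcat /proc/config.gz | grep -E 'CONFIG_VXLAN|CONFIG_OPENVSWITCH|CONFIG_NETFILTER_XT'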