This was announced yesterday. Bolded is the particular pony I have been wishing for. Is there a beta image or apt package available for the NVIDIA Docker runtime on the Nano?
It's coming; stay tuned for our announcements in the coming month. In the meantime, there are existing recipes for running GPU-accelerated workloads within standard Docker containers. It is a bit of additional work, but you can get started today using one of the community-supplied recipes, such as this one: [url]https://blog.hypriot.com/post/nvidia-jetson-nano-intro/[/url]
Earlier this week I saw them say “next week” in another post.
Note that yesterday was a major holiday in the US, and today a lot of people are taking the day off to make it a four-day holiday.
Guessing last-minute commits broke everything. It happened to my spouse right before the 4th; he was stuck on his laptop half the day as a result. I'm fine with the delay for 4.2.1 personally. Stuff happens, and I would rather it be stable.
P.S.: While waiting for the NVIDIA Docker runtime, I automated provisioning Jetson devices to join Kubernetes clusters, including automatically building a suitable kernel for Weave networking and using the mount-/dev/nv* approach to support CUDA; see https://github.com/helmuthva/jetson
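For anyone curious what the mount-/dev/nv* approach looks like in practice, here is a minimal sketch. The exact device node names and CUDA path are assumptions based on typical JetPack installs; check the repo above for the precise list your board needs:

```shell
# Sketch: expose the Tegra GPU device nodes and the host's CUDA toolkit
# to a plain container (no nvidia runtime needed). Node names and paths
# may vary across JetPack releases.
docker run -it --rm \
  --device /dev/nvhost-ctrl \
  --device /dev/nvhost-ctrl-gpu \
  --device /dev/nvhost-prof-gpu \
  --device /dev/nvmap \
  --device /dev/nvhost-gpu \
  --device /dev/nvhost-as-gpu \
  -v /usr/local/cuda:/usr/local/cuda:ro \
  arm64v8/ubuntu:18.04 /bin/bash
```

Inside the container, CUDA binaries then link against the host's libraries via the read-only bind mount.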
@prlawrence: By now, https://github.com/helmuthva/jetson contains a Kubernetes deployment of a Jupyter server supporting CUDA-accelerated TensorFlow running on Jetson nodes (such as Nanos and Xaviers).
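In case it helps others, deploying and reaching such a Jupyter service generally looks like the sketch below; the manifest and service names are placeholders, not necessarily the ones used in the repo:

```shell
# Placeholder names: substitute the actual manifest and service from the repo.
kubectl apply -f jupyter-deployment.yaml
kubectl port-forward service/jupyter 8888:8888
# Then open http://localhost:8888 on the machine running kubectl.
```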
Question: Is this the assumed approach, or will base images containing the CUDA and cuDNN libraries be provided as part of the NVIDIA Docker runtime in JetPack 4.2.1?
P.S.: For those interested in getting Anaconda running on Jetson: it's part of the deployment; see the Dockerfile.
Is there a reason this isn't included in the image? I really don't like to bind mount, in part because of container escapes, and in part because I've had issues using it with Swarm.
Edit: Apparently there is a reason given in the link: image size. The problem is being worked on.
Edit, also: other than that, thank you for the beta. Very cool!
I'm happy to report that GPU-accelerated Docker works fine on my Nano running JetPack 4.2.1. As Helmut mentioned, there's an issue with pulling the image from the NVIDIA cloud, so I created my own images. If you're interested, see here: https://github.com/jetsistant/docker-cuda-jetpack. I tried almost all of the CUDA samples in the devel container, and they all run flawlessly. Thanks for this great release!
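For reference, running the classic deviceQuery sample inside such a devel container could look like the sketch below; the image tag and sample path are assumptions (the samples must have been built inside the image first):

```shell
# Assumed image tag: substitute your locally built devel image.
# deviceQuery prints "Result = PASS" when the GPU is visible to the container.
docker run --runtime nvidia --rm -it jetson-cuda:devel \
  /usr/local/cuda/samples/bin/aarch64/linux/release/deviceQuery
```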
For those interested in running Jetson edge devices as part of a larger multi-platform Kubernetes cluster: I'm happy to report that JetPack 4.2.1 works with Kubernetes + Weave networking just fine after minor tweaks. Thank you NVIDIA for this wonderful release :-)
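For completeness, once the kernel tweaks are in place, joining a Jetson node to an existing cluster is the standard kubeadm flow; the address, token, and hash below are placeholders:

```shell
# Placeholders: use the values printed by `kubeadm init` (or
# `kubeadm token create --print-join-command`) on your control-plane node.
sudo kubeadm join <master-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```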