Can nvidia-docker run on tx2?

Can nvidia-docker run on the TX2?
I followed the guide on GitHub, but I can't get nvidia-docker installed on the TX2.
Is it supported? If yes, how do I do it?
Is there any documentation?
Thanks.

Hi

nvidia-docker doesn't support the Jetson platform. You can find this information here:
Frequently Asked Questions · NVIDIA/nvidia-docker Wiki · GitHub

But the official Docker works, and here is a tutorial:
https://github.com/Technica-Corporation/Tegra-Docker
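
The basic idea there (a rough sketch, not the exact wrapper script from that repo; the image name is just a placeholder) is to run stock Docker and pass the GPU device nodes and the CUDA toolkit through by hand, e.g.:

$ docker run -it \
    --device=/dev/nvhost-ctrl \
    --device=/dev/nvhost-ctrl-gpu \
    --device=/dev/nvhost-prof-gpu \
    --device=/dev/nvhost-gpu \
    --device=/dev/nvhost-as-gpu \
    --device=/dev/nvmap \
    -v /usr/local/cuda:/usr/local/cuda \
    my-cuda-image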

Thanks.

The Tegra-Docker approach, based on mounting a subset of the /dev/nv* devices into the container, no longer seems to work on JetPack 4.2 / L4T R32.1.

What is the best way to do it now?

Thanks,

Geoff

To expand a bit: if I create a simple container with all the samples included using:

FROM arm64v8/ubuntu:16.04
ENV LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64
COPY NVIDIA_CUDA-10.0_Samples NVIDIA_CUDA-10.0_Samples
CMD /bin/bash
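
(Built with something like the following, assuming the NVIDIA_CUDA-10.0_Samples directory sits next to the Dockerfile; cuda_container is the tag used in the run command below.)

$ docker build -t cuda_container .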

… and then run it mapping all the /dev/nv* devices:

$ docker run -it \
    --device=/dev/nvhost-as-gpu \
    --device=/dev/nvhost-ctrl \
    --device=/dev/nvhost-ctrl-gpu \
    --device=/dev/nvhost-ctrl-isp \
    --device=/dev/nvhost-ctrl-nvcsi \
    --device=/dev/nvhost-ctrl-nvdec \
    --device=/dev/nvhost-ctxsw-gpu \
    --device=/dev/nvhost-dbg-gpu \
    --device=/dev/nvhost-gpu \
    --device=/dev/nvhost-isp \
    --device=/dev/nvhost-msenc \
    --device=/dev/nvhost-nvcsi \
    --device=/dev/nvhost-nvdec \
    --device=/dev/nvhost-nvjpg \
    --device=/dev/nvhost-prof-gpu \
    --device=/dev/nvhost-sched-gpu \
    --device=/dev/nvhost-tsec \
    --device=/dev/nvhost-tsecb \
    --device=/dev/nvhost-tsg-gpu \
    --device=/dev/nvhost-vi \
    --device=/dev/nvhost-vic \
    --device=/dev/nvmap \
    -v /usr/local/cuda-10.0:/usr/local/cuda-10.0 \
    cuda_container

root@e06082804232:/# ./NVIDIA_CUDA-10.0_Samples/bin/aarch64/linux/release/deviceQuery
./NVIDIA_CUDA-10.0_Samples/bin/aarch64/linux/release/deviceQuery Starting…

CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 35
→ CUDA driver version is insufficient for CUDA runtime version
Result = FAIL

… which suggests some part of the driver isn't being properly punched through to the container?

Geoff

Hi geoff.ballinger,

nvidia-docker will be included with JetPack 4.2.1, to be released the first week of July.
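
Once that is out, running a GPU container should just be a matter of selecting the NVIDIA runtime instead of passing devices through by hand, along these lines (the image name here is only a placeholder):

$ sudo docker run -it --runtime nvidia my-cuda-image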

Thanks

THIS IS GREAT!!! EXCELLENT TIMING FOR MY PROJECT. THANK YOU NVIDIA.

I’ve built out a repo around productionizing Jetson containers that could help: https://github.com/idavis/jetson-containers (Running CUDA containers on the Jetson platform).

The README walks through a lot of what is required. It also shows how to build OpenCV and other libraries against specific versions of JetPack and devices.

Starting with JetPack 4.2, it mimics the official CUDA images, providing base, runtime, and devel variants.
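
To give a rough idea of that layering (a simplified, hypothetical sketch, not the actual Dockerfiles from the repo; the COPY sources are placeholders), each stage just adds more of the CUDA stack on top of the previous one:

# base: Ubuntu plus the L4T user-space driver libraries (hypothetical layout)
FROM arm64v8/ubuntu:18.04 AS base
COPY tegra-drivers/ /usr/lib/aarch64-linux-gnu/tegra/
ENV LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/tegra

# runtime: adds the CUDA runtime libraries on top of base
FROM base AS runtime
COPY cuda-10.0/lib64/ /usr/local/cuda-10.0/lib64/
ENV LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:${LD_LIBRARY_PATH}

# devel: adds headers, nvcc, and the rest of the toolkit for building on-device
FROM runtime AS devel
COPY cuda-10.0/ /usr/local/cuda-10.0/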

Regards

Any updates on release date?

Same here, looking forward to it.

Bump… I’ve had my project on hold for 4 weeks now waiting for 4.2.1 to come out.

I had to revert to 3.3 in order to use the device/library pass-through method. Looking forward to using nvidia-docker instead.

I made it work on JetPack 4.2 with the help of the NVIDIA guys:

https://devtalk.nvidia.com/default/topic/1052660/jetson-tx2/accessing-the-gpu-from-docker-on-l4t-r32-1/post/5348132/#5348132

… which was a good thing, since I’m currently sitting on a train with it strapped in a shiny box on the underside, so I couldn’t wait for 4.2.1!
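
(For anyone who can't open the link: the gist is that on L4T the driver library libcuda.so lives under /usr/lib/aarch64-linux-gnu/tegra, so that directory also has to be visible inside the container and on LD_LIBRARY_PATH. Roughly, on top of the device mappings above, something like:)

$ docker run -it \
    --device=/dev/nvhost-ctrl \
    --device=/dev/nvhost-ctrl-gpu \
    --device=/dev/nvhost-prof-gpu \
    --device=/dev/nvhost-gpu \
    --device=/dev/nvhost-as-gpu \
    --device=/dev/nvmap \
    -v /usr/local/cuda-10.0:/usr/local/cuda-10.0 \
    -v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra \
    -e LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:/usr/lib/aarch64-linux-gnu/tegra \
    cuda_container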

Geoff

@acwatkins @pcdangio @geoff.ballinger - 4.2 works fine without nvidia-docker and without volume-mounting the host file system into the container. Please take a look at https://github.com/idavis/jetson-containers.

-Ian

Unfortunately I need the official NVIDIA JetPack/L4T release… I’m using a ConnectTech carrier that has its own board support/flashing tools, which are extended from the official NVIDIA releases.

@pcdangio I added CTI carrier support for JetPack 4.2; I just haven’t been able to verify it, as I don’t have a carrier board: https://github.com/idavis/jetson-containers/issues/4 (Support for Connect Tech BSPs).

The CTI BSP just adds some drivers and conf files, eventually calling into the regular JetPack apply_binaries.sh. From there you can flash it with NVIDIA’s tools, CTI’s tools, or my wrapper around them.
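
For reference, the flow is the standard L4T one, something like this from the Linux_for_Tegra directory (the board config below is the stock TX2 one; CTI carriers use their own config names):

$ sudo ./apply_binaries.sh
# put the module into recovery mode, then:
$ sudo ./flash.sh jetson-tx2 mmcblk0p1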