Docker container with CUDA, CUDNN, OPENCV installed on Jetson Xavier

Hi guys,

Is there any NVIDIA Docker image available so far for Jetson Xavier that already has CUDA, cuDNN, and OpenCV installed?
I'm trying to run some object detection tasks in a Docker container on Xavier, but I can't find a suitable container image.
Or how can I install CUDA, cuDNN, and OpenCV in a clean Docker image based on nvcr.io/nvidia/l4t-base:r32.3.1?

Thanks very much for any suggestions.

I've tried sharing cudnn.h into the container with a Docker volume, but inside the container it shows up as an empty file. I don't understand why.

I am also trying to build OpenCV with CUDA and cuDNN in a Docker environment on a Jetson Nano, based on nvcr.io/nvidia/l4t-ml:r32.4.4-py3, but the build fails with the error below.
Can anyone from NVIDIA help us with this issue?

/usr/lib/aarch64-linux-gnu/libcublas.so: file not recognized: File truncated
collect2: error: ld returned 1 exit status
make[2]: *** [lib/libopencv_cudev.so.4.5.0] Error 1
modules/cudev/CMakeFiles/opencv_cudev.dir/build.make:95: recipe for target 'lib/libopencv_cudev.so.4.5.0' failed
CMakeFiles/Makefile2:2962: recipe for target 'modules/cudev/CMakeFiles/opencv_cudev.dir/all' failed
make[1]: *** [modules/cudev/CMakeFiles/opencv_cudev.dir/all] Error 2
Makefile:162: recipe for target 'all' failed
make: *** [all] Error 2
Make did not successfully build

What is the status of this issue? The L4T base images seem completely broken, with a lot of zero-length files and things stuck in /etc/alternatives without any rhyme or reason.

Things like nvidia-l4t-core, nvidia-l4t-gstreamer, cudnn, etc. are all either partially installed or just broken. How are we supposed to port applications using these images?

Hi @burakteke, run the container with --runtime nvidia or set your default docker-runtime to nvidia if you need this during docker build operations: https://github.com/dusty-nv/jetson-containers#docker-default-runtime

Also, the latest l4t-ml:r32.5.0 has the version of OpenCV from JetPack installed inside it.
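For example, something along these lines should start that container with the host's CUDA/cuDNN libraries mounted in (the exact image tag is an assumption on my part, so check NGC for the current one):

# --runtime nvidia tells the container runtime to mount CUDA, cuDNN, TensorRT, etc. from the device
sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.5.0-py3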

@alexander.sack these files will not be zero-length if the container is run with --runtime nvidia. These files are mounted from the device: https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-Container-Runtime-on-Jetson#mount-plugins

If you need these files during docker build operations, then set your default docker-runtime to nvidia like shown here: https://github.com/dusty-nv/jetson-containers#docker-default-runtime
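Concretely, that means adding "default-runtime": "nvidia" to /etc/docker/daemon.json on the Jetson (a sketch of the typical contents, per the link above; keep any other keys you already have) and then restarting Docker with sudo systemctl restart docker:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}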

You do not need to install the nvidia-jetpack apt packages inside the container because these are mounted from the device in order to reduce container size for embedded systems.
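For reference, what actually gets mounted is driven by the CSV files described on that mount-plugins page; on a typical JetPack host you can inspect them like this (path taken from that wiki, so verify it on your own system):

# each CSV lists the host files the nvidia runtime maps into containers
ls /etc/nvidia-container-runtime/host-files-for-container.d/
# typically: cuda.csv  cudnn.csv  l4t.csv  tensorrt.csv  visionworks.csv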

But that means the host Jetson system has to have all of these NVIDIA development binaries preinstalled just to be able to build a Docker image? Why? That defeats the whole purpose.

@alexander.sack it's the standard JetPack load that would already be on the Jetson device under normal circumstances after it is flashed. Currently you have to run the same version of CUDA that came with JetPack-L4T, so the containers would all be using the same version of CUDA anyway.

In the future, we plan to offer the option of 'fat' containers once the CUDA version can be decoupled from the underlying JetPack-L4T version.

But that’s not true! Xavier NX’s JP4.5 SD card image does not have these files in it. I had to manually install everything.

Which files? Both the Nano and NX SD card images come with the JetPack components preinstalled in the image, including CUDA/cuDNN/TensorRT/VisionWorks/VPI/OpenCV/etc.

They may come with the runtime files, but what about the dev files (headers, etc.) needed to build things like OpenCV?

I installed JP4.5 on my NX devkit and I didn’t have any of these development files.

Yes, it comes with the headers. For example, the CUDA headers should already be under /usr/local/cuda/include, the cuDNN and TensorRT headers are under /usr/include/aarch64-linux-gnu, and the cuBLAS headers are under /usr/include.
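A quick sanity check on the device (paths as given above; exact header names can vary a bit between JetPack versions):

ls /usr/local/cuda/include/cuda.h            # CUDA toolkit headers
ls /usr/include/aarch64-linux-gnu/cudnn*.h   # cuDNN headers
ls /usr/include/aarch64-linux-gnu/NvInfer.h  # TensorRT headers
ls /usr/include/cublas_v2.h                  # cuBLAS headers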

Ok, you are correct (I was thinking of the bare image I had on my TX2).

So the general workflow to build things (sketched in the Dockerfile after this list) is to:

  • Flash the host with the stock image
  • Make the nvidia-container-runtime the default runtime or specify it on the command line (--runtime)
  • The nvidia-container-runtime will essentially mount these important files into your container both at docker build and run time.
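Putting that together, a Dockerfile roughly like the following is a minimal sketch of the build step. The base image tag, apt package list, OpenCV 4.5.0 version, and CMake flags are assumptions that will need tuning for your device and use case, and it only works if the default runtime is already set to nvidia so the CUDA/cuDNN files are visible during docker build:

FROM nvcr.io/nvidia/l4t-base:r32.4.4

# build tools and common OpenCV dependencies (trim or extend as needed)
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git pkg-config \
        libavcodec-dev libavformat-dev libswscale-dev libgtk-3-dev && \
    rm -rf /var/lib/apt/lists/*

# OpenCV 4.5.0 sources, matching the version in the error log above
RUN git clone --branch 4.5.0 --depth 1 https://github.com/opencv/opencv.git /opt/opencv && \
    git clone --branch 4.5.0 --depth 1 https://github.com/opencv/opencv_contrib.git /opt/opencv_contrib

# CUDA_ARCH_BIN=7.2 targets Xavier; use 5.3 for Nano
RUN mkdir /opt/opencv/build && cd /opt/opencv/build && \
    cmake -D CMAKE_BUILD_TYPE=Release \
          -D OPENCV_EXTRA_MODULES_PATH=/opt/opencv_contrib/modules \
          -D WITH_CUDA=ON -D WITH_CUDNN=ON -D OPENCV_DNN_CUDA=ON \
          -D CUDA_ARCH_BIN=7.2 -D CUDA_ARCH_PTX="" .. && \
    make -j$(nproc) && \
    make install && ldconfig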

I do hope that NVIDIA reconsiders this a bit. I would much rather install the runtime components my application needs in the container than have to first "set up the host" to run a container.

If you want the files available during build-time (i.e. during docker build), then you must make your default runtime nvidia, because --runtime is not an option to docker build. If you set your default runtime though (like I linked to above), docker build will use that runtime instead.
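As a quick check that build-time mounting is working after you change the default runtime, you can build a throwaway image from stdin (the nvcc path assumes the standard JetPack layout):

# docker build has no --runtime flag, so nvcc is only found here if the
# daemon's default runtime is nvidia
printf 'FROM nvcr.io/nvidia/l4t-base:r32.3.1\nRUN /usr/local/cuda/bin/nvcc --version\n' | sudo docker build -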

As mentioned, in the future we do plan to enable the traditional style of containers once some changes to the driver infrastructure are made that would permit it.