Access CSI in Docker container

Hello, I am working with a Jetson Xavier NX, and I created a Docker image based on nvcr.io/nvidia/l4t-base:r32.4.4. Now I would like to access a Pi Camera v2, connected over CSI, from a Python script. However, I cannot do that, since my OpenCV installation does not have GStreamer enabled. I was wondering if there is an official image for the Xavier, like /nvidia/dli/dli-nano-ai:v2.0.1-r32.4.4 (which actually works for me), that comes with OpenCV built with all/most of the features enabled (GStreamer, cuDNN, ...).
Or is there a way to use the OpenCV installation that ships with JetPack 4.4, which is running as the OS on the Jetson?
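
(For context: once a GStreamer-enabled OpenCV build is available inside the container, a CSI camera such as the Pi Camera v2 is usually opened through an nvarguscamerasrc pipeline. The following is only a minimal sketch, assuming the container was started with --runtime nvidia and, for CSI access, the host's Argus socket mounted, e.g. -v /tmp/argus_socket:/tmp/argus_socket; the resolution and framerate values are illustrative.)

# run inside the container; assumes OpenCV with GStreamer support and a CSI camera on sensor-id 0
python3 - <<'EOF'
import cv2

# illustrative capture pipeline: camera -> NVMM buffer -> BGR frames for OpenCV
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, "
    "format=(string)NV12, framerate=(fraction)30/1 ! "
    "nvvidconv ! video/x-raw, format=(string)BGRx ! "
    "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()
print("frame captured:", ok, frame.shape if ok else None)
cap.release()
EOF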

Hi @polivicio, if you look at this section of the JetBot Dockerfile, it installs JetPack's OpenCV, which has GStreamer enabled, into the container:

This is how the dli-nano container and the JetBot container install JetPack’s OpenCV.
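
(Roughly speaking, that approach adds NVIDIA's Jetson apt repository inside the container and installs the OpenCV packages that ship with JetPack, which are built with GStreamer enabled. A sketch of the idea is below; the repository line, key URL, and package name are assumptions for r32.4, so check the linked Dockerfile for the exact, known-good steps.)

# as RUN steps in a Dockerfile based on l4t-base (names/URLs below are assumptions):
echo "deb https://repo.download.nvidia.com/jetson/common r32.4 main" \
    > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
apt-key adv --fetch-keys https://repo.download.nvidia.com/jetson/jetson-ota-public.asc
apt-get update && apt-get install -y --no-install-recommends libopencv-python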

If you wanted to build OpenCV with CUDA/cuDNN/etc. enabled in your container, you could run @mdegans' OpenCV build script in your Dockerfile: GitHub - mdegans/nano_build_opencv: Build OpenCV on Nvidia Jetson Nano
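
(A rough sketch of what such a step could look like in a Dockerfile based on l4t-base; the version argument and cleanup are illustrative, and inside a container you may need to strip the sudo calls from the script first, as comes up later in this thread.)

# hypothetical RUN step; requires the nvidia default runtime during `docker build`
# (see further down), and sudo may need to be removed from build_opencv.sh
git clone https://github.com/mdegans/nano_build_opencv /tmp/nano_build_opencv && \
    cd /tmp/nano_build_opencv && \
    ./build_opencv.sh 4.4.0 && \
    rm -rf /tmp/nano_build_opencv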


Thanks for the link, @dusty_nv

@polivicio

There is a docker branch of the OpenCV build script with a sample Dockerfile. If you're making something OpenCV-based, it might be easiest to build that image and use it as a base, or modify the Dockerfile to your needs. You'll likely have to build it yourself, since I haven't built and pushed a new version since the GA release of JetPack 4.4.

At a minimum, you may wish to modify the JetPack version at the top of the docker build script to match the current JetPack base image tag. You can also modify the OpenCV version in the same place (e.g. "4.4.0" instead of master).
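
(A hypothetical sketch of that workflow; the docker branch name comes from the post above, but the variable names and build command below are only illustrative, so check the script and sample Dockerfile in the repo for the real ones.)

$ git clone -b docker https://github.com/mdegans/nano_build_opencv.git
$ cd nano_build_opencv
# edit the version variables at the top of the docker build script, e.g. a JetPack
# tag of "r32.4.4" and an OpenCV version of "4.4.0" instead of master, then build:
$ sudo docker build -t tegra-opencv:local .   # adjust to match the repo's instructions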

@dusty_nv I’ve already tried the bash script for the installation but I got:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:

and a long list of libraries like CUDA_CUDA_LIBRARY, CUDA_cublas_LIBRARY

and then :

Configuring incomplete, errors occurred!
See also "/tmp/build_opencv/opencv/build/CMakeFiles/CMakeOutput.log".
See also "/tmp/build_opencv/opencv/build/CMakeFiles/CMakeError.log".
make: *** No targets specified and no makefile found.  Stop.
make: *** No rule to make target 'install'.  Stop.
Removing intermediate container 813dc4cd6a99

Are you running this inside Docker? If so, set your default docker-runtime to nvidia as shown here:

https://github.com/dusty-nv/jetson-containers#docker-default-runtime

This will enable you to use CUDA while you are building Docker containers.
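
(For reference, the change described at that link amounts to something like the following on the host; the daemon has to be restarted for it to take effect.)

In /etc/docker/daemon.json:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

then restart the Docker daemon (or reboot):

$ sudo systemctl restart docker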

If you were running this script on your Jetson (not in a container), are you sure that the CUDA toolkit is installed (under /usr/local/cuda)? Did you flash your Xavier NX's SD card with the SD card image? That SD card image already comes with CUDA/cuDNN/etc. pre-installed.
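
(A quick way to check on the host that the JetPack CUDA toolkit is present:)

$ /usr/local/cuda/bin/nvcc --version   # should report CUDA 10.2 on JetPack 4.4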


@dusty_nv Yes, I am running the script inside Docker on the Xavier NX. I can see that CUDA is installed on the board. Checking the runtime in /etc/docker/daemon.json, it has "default-runtime": "nvidia", but I still get the same configuration error:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_cublas_LIBRARY (ADVANCED)
...
CUDA_cufft_LIBRARY (ADVANCED)
...
CUDA_nppc_LIBRARY (ADVANCED)
...
CUDA_nppial_LIBRARY (ADVANCED)
...
CUDA_nppicc_LIBRARY (ADVANCED)
...
CUDA_nppicom_LIBRARY (ADVANCED)
...
CUDA_nppidei_LIBRARY (ADVANCED)
...
CUDA_nppif_LIBRARY (ADVANCED)
...
CUDA_nppig_LIBRARY (ADVANCED)
...
CUDA_nppim_LIBRARY (ADVANCED)
...
CUDA_nppist_LIBRARY (ADVANCED)
...
CUDA_nppisu_LIBRARY (ADVANCED)
...
CUDA_nppitc_LIBRARY (ADVANCED)
...
CUDA_npps_LIBRARY (ADVANCED)
...
-- Configuring incomplete, errors occurred!
See also "/tmp/build_opencv/opencv/build/CMakeFiles/CMakeOutput.log".
See also "/tmp/build_opencv/opencv/build/CMakeFiles/CMakeError.log".
make: *** No targets specified and no makefile found.  Stop.

I am running the script from a Docker container on the Xavier. The base image that I am using is nvcr.io/nvidia/l4t-base:r32.4.4

Hmm, I just tried the same thing, and was able to get past these config steps without error. All I had to do was remove references to sudo in build_opencv.sh. FYI I ran ./build_opencv.sh 4.4.0
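
(If you hit the same sudo issue when running the script inside a container, where sudo is usually absent and unnecessary as root, one way to strip it is shown below; the sed edit is a suggestion, not part of the script's own instructions.)

# run in the directory containing build_opencv.sh, inside the container
sed -i 's/sudo //g' build_opencv.sh
./build_opencv.sh 4.4.0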

Can you check that you have these libraries present, both from inside and outside the container?

# ls /usr/local/cuda/lib64

libcudadevrt.a                libcusolver.so            libnppicom_static.a      libnppitc.so
libcudart.so                  libcusolver.so.10         libnppidei.so            libnppitc.so.10
libcudart.so.10.2             libcusolver.so.10.3.0.89  libnppidei.so.10         libnppitc.so.10.2.1.89
libcudart.so.10.2.89          libcusolver_static.a      libnppidei.so.10.2.1.89  libnppitc_static.a
libcudart_static.a            libcusparse.so            libnppidei_static.a      libnpps.so
libcufft.so                   libcusparse.so.10         libnppif.so              libnpps.so.10
libcufft.so.10                libcusparse.so.10.3.1.89  libnppif.so.10           libnpps.so.10.2.1.89
libcufft.so.10.1.2.89         libcusparse_static.a      libnppif.so.10.2.1.89    libnpps_static.a
libcufft_static.a             liblapack_static.a        libnppif_static.a        libnvToolsExt.so
libcufft_static_nocallback.a  libmetis_static.a         libnppig.so              libnvToolsExt.so.1
libcufftw.so                  libnppc.so                libnppig.so.10           libnvToolsExt.so.1.0.0
libcufftw.so.10               libnppc.so.10             libnppig.so.10.2.1.89    libnvgraph.so
libcufftw.so.10.1.2.89        libnppc.so.10.2.1.89      libnppig_static.a        libnvgraph.so.10
libcufftw_static.a            libnppc_static.a          libnppim.so              libnvgraph.so.10.2.89
libcuinj64.so                 libnppial.so              libnppim.so.10           libnvgraph_static.a
libcuinj64.so.10.2            libnppial.so.10           libnppim.so.10.2.1.89    libnvperf_host.so
libcuinj64.so.10.2.89         libnppial.so.10.2.1.89    libnppim_static.a        libnvperf_target.so
libculibos.a                  libnppial_static.a        libnppist.so             libnvrtc-builtins.so
libcupti.so                   libnppicc.so              libnppist.so.10          libnvrtc-builtins.so.10.2
libcupti.so.10.2              libnppicc.so.10           libnppist.so.10.2.1.89   libnvrtc-builtins.so.10.2.89
libcupti.so.10.2.75           libnppicc.so.10.2.1.89    libnppist_static.a       libnvrtc.so
libcurand.so                  libnppicc_static.a        libnppisu.so             libnvrtc.so.10.2
libcurand.so.10               libnppicom.so             libnppisu.so.10          libnvrtc.so.10.2.89
libcurand.so.10.1.2.89        libnppicom.so.10          libnppisu.so.10.2.1.89   stubs
libcurand_static.a            libnppicom.so.10.2.1.89   libnppisu_static.a
# ls /usr/lib/aarch64-linux-gnu/libcu*

/usr/lib/aarch64-linux-gnu/libcublas.so                    /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8
/usr/lib/aarch64-linux-gnu/libcublas.so.10                 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcublas.so.10.2.2.89          /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer_static_v8.a
/usr/lib/aarch64-linux-gnu/libcublasLt.so                  /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8
/usr/lib/aarch64-linux-gnu/libcublasLt.so.10               /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcublasLt.so.10.2.2.89        /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train_static_v8.a
/usr/lib/aarch64-linux-gnu/libcuda.so                      /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so
/usr/lib/aarch64-linux-gnu/libcuda.so.1                    /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8
/usr/lib/aarch64-linux-gnu/libcuda.so.1.1                  /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn.so                     /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer_static_v8.a
/usr/lib/aarch64-linux-gnu/libcudnn.so.8                   /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.0.0               /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so           /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.0.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8         /usr/lib/aarch64-linux-gnu/libcudnn_ops_train_static_v8.a
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.0.0     /usr/lib/aarch64-linux-gnu/libcudnn_static_v8.a
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer_static_v8.a  /usr/lib/aarch64-linux-gnu/libcups.so.2
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so           /usr/lib/aarch64-linux-gnu/libcurl-gnutls.so.3
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8         /usr/lib/aarch64-linux-gnu/libcurl-gnutls.so.4
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.0.0     /usr/lib/aarch64-linux-gnu/libcurl-gnutls.so.4.5.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train_static_v8.a  /usr/lib/aarch64-linux-gnu/libcurl.so.4
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so           /usr/lib/aarch64-linux-gnu/libcurl.so.4.5.0

@dusty_nv I have all of them, which is really weird. I am trying again with 4.4.0 from inside the container. I'll keep you updated.

@dusty_nv It is now working! I guess the problem was, as you mentioned, the runtime; I had probably forgotten to restart the Docker daemon after changing daemon.json.
Thank you!

@polivicio

If anybody is interested in a solution that does not require modifications to the default runtime, I just updated the docker branch of my OpenCV script and pushed the built images to Docker Hub. To run, simply:

 $ sudo docker run -it --runtime nvidia registry.hub.docker.com/mdegans/tegra-opencv:latest
root@44004ed2b1f6:/usr/local/src/build_opencv# python3
Python 3.6.9 (default, Oct  8 2020, 12:12:24) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.cuda.printCudaDeviceInfo(0)
*** CUDA Device Query (Runtime API) version (CUDART static linking) *** 

Device count: 1

Device 0: "Xavier"
  CUDA Driver Version / Runtime Version          10.20 / 10.20
  CUDA Capability Major/Minor version number:    7.2
  Total amount of global memory:                 7772 MBytes (8149061632 bytes)
  GPU Clock Speed:                               1.11 GHz
  Max Texture Dimension Size (x,y,z)             1D=(131072), 2D=(131072,65536), 3D=(16384,16384,16384)
  Max Layered Texture Size (dim) x layers        1D=(32768) x 2048, 2D=(32768,32768) x 2048
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per block:           1024
  Maximum sizes of each dimension of a block:    1024 x 1024 x 64
  Maximum sizes of each dimension of a grid:     2147483647 x 65535 x 65535
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and execution:                 Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Concurrent kernel execution:                   Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support enabled:                No
  Device is using TCC driver mode:               No
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           0 / 0
  Compute Mode:
      Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) 

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version  = 10.20, CUDA Runtime Version = 10.20, NumDevs = 1

>>> 

Build instructions are in the repo.

Please report any issues here:
