Docker image with Python support for OpenCV, TensorRT and PyCUDA

Hi,

I just started playing around with the NVIDIA Container Runtime on Jetson and the l4t-base image. I currently have some applications written in Python that require OpenCV, PyCUDA and TensorRT, and I am trying to understand the best method for making them work inside the container.

I understand that the CUDA/TensorRT libraries are mounted inside the container; however, the Python API bindings do not seem to be installed. What is the best method for accessing these APIs from Python inside the container? Do I have to rebuild TensorRT inside the container, or can these bindings be installed easily from a pre-compiled package?

I have the same question regarding OpenCV: what is the best method for installing the Python package inside the container? Is there a way to mount these libraries, or should I try building OpenCV from source or installing a pre-compiled L4T package inside the container?

Thanks in advance for any suggestions!

Hi,

You can achieve this by copying the package and installing it:

COPY ./libnvinfer5_5.1.6-1+cuda10.0_arm64.deb /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb
RUN dpkg -i /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb
...

Thanks.

Hi,

Thank you for the quick response. I was able to locate the package on my host computer in the cached SDK Manager downloads directory:

~/Downloads/nvidia/sdkm_downloads/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb

Is there a way to download this package directly from the TX2? I tried a simple wget, but it seems to be protected.

Thank you

I just tried to copy the package and build the image, but I get the following error:

Dockerfile:

FROM nvcr.io/nvidia/l4t-base:r32.2

COPY ./libnvinfer5_5.1.6-1+cuda10.0_arm64.deb /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb
RUN dpkg -i /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb

build command:

docker build --tag test .

Error:

Sending build context to Docker daemon  37.73MB
Step 1/3 : FROM nvcr.io/nvidia/l4t-base:r32.2
 ---> e3c56b78e93f
Step 2/3 : COPY ./libnvinfer5_5.1.6-1+cuda10.0_arm64.deb /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb
 ---> Using cache
 ---> 32da584fe16c
Step 3/3 : RUN dpkg -i /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb
 ---> Running in b1569670ed3b
Selecting previously unselected package libnvinfer5.
(Reading database ... 40568 files and directories currently installed.)
Preparing to unpack .../libnvinfer5_5.1.6-1+cuda10.0_arm64.deb ...
Unpacking libnvinfer5 (5.1.6-1+cuda10.0) ...
dpkg: dependency problems prevent configuration of libnvinfer5:
 libnvinfer5 depends on libcudnn7; however:
  Package libcudnn7 is not installed.
 libnvinfer5 depends on cuda-cublas-10-0; however:
  Package cuda-cublas-10-0 is not installed.
 libnvinfer5 depends on cuda-cudart-10-0; however:
  Package cuda-cudart-10-0 is not installed.

dpkg: error processing package libnvinfer5 (--install):
 dependency problems - leaving unconfigured
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Errors were encountered while processing:
 libnvinfer5
The command '/bin/sh -c dpkg -i /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb' returned a non-zero code: 1

I was under the impression that some of these packages came pre-installed with the l4t-base:r32.2 image, or were mounted into the container at runtime. So I also tried only copying the package into the image at build time and then installing it from inside the running container.

Dockerfile:

FROM nvcr.io/nvidia/l4t-base:r32.2

COPY ./libnvinfer5_5.1.6-1+cuda10.0_arm64.deb /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb
#RUN dpkg -i /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb

build command:

docker build --tag test .

Start the container using the NVIDIA runtime:

docker run -it --rm --runtime nvidia test

Installation attempt and output:

dpkg -i /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb
Selecting previously unselected package libnvinfer5.
(Reading database ... 40568 files and directories currently installed.)
Preparing to unpack .../libnvinfer5_5.1.6-1+cuda10.0_arm64.deb ...
Unpacking libnvinfer5 (5.1.6-1+cuda10.0) ...
dpkg: error processing archive /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb (--install):
 unable to make backup link of './usr/lib/aarch64-linux-gnu/libnvinfer.so.5.1.6' before installing new version: Invalid cross-device link
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Errors were encountered while processing:
 /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb

Hi,

There are many dependencies involved here.

It looks like the default container only ships with the CUDA toolkit.
You will need to install cuDNN, TensorRT, and the TensorRT Python package on your own.
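
For reference, a rough Dockerfile sketch of that install-at-build-time route, assuming the .deb files from the SDK Manager download cache have been copied into the build context (file names are taken from this thread; the cuda-cublas-10-0 / cuda-cudart-10-0 package names come from the dpkg errors above, and none of this has been verified on this exact image):

FROM nvcr.io/nvidia/l4t-base:r32.2

# .deb packages copied from the SDK Manager download cache on the host
COPY ./cuda-repo-l4t-10-0-local-10.0.326_1.0-1_arm64.deb \
     ./libcudnn7_7.5.0.56-1+cuda10.0_arm64.deb \
     ./libnvinfer5_5.1.6-1+cuda10.0_arm64.deb \
     /tmp/

# Register the local CUDA repository, let apt pull in the CUDA dependencies
# that dpkg complains about, then install cuDNN and TensorRT on top of them.
RUN apt-get update && apt-get install -y --no-install-recommends gnupg2 ca-certificates && \
    dpkg -i /tmp/cuda-repo-l4t-10-0-local-10.0.326_1.0-1_arm64.deb && \
    apt-key add /var/cuda-repo-10-0-local-10.0.326/7fa2af80.pub && \
    apt-get update && \
    apt-get install -y cuda-cublas-10-0 cuda-cudart-10-0 && \
    dpkg -i /tmp/libcudnn7_7.5.0.56-1+cuda10.0_arm64.deb \
            /tmp/libnvinfer5_5.1.6-1+cuda10.0_arm64.deb && \
    rm -rf /var/lib/apt/lists/* /tmp/*.deb

The TensorRT Python bindings ship as a separate .deb (named something like python3-libnvinfer; treat that name as an assumption) and can be installed the same way.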

Thanks.

Hello alvaro.a.calvo, have you solved this problem?

Hi,

I did find a solution that works for me. I am not able to post a detailed response right now, but basically you can mount the libraries using the “plugin” method provided by the NVIDIA container runtime. See the wiki below for how to create a plugin.

https://github.com/NVIDIA/libnvidia-container/blob/jetson/design/mount_plugins.md

I ended up mounting the OpenCV and TensorRT Python bindings and building PyCUDA inside the container. I’ll post all the steps once I’m back from vacation :)
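
As a rough illustration of that plugin mechanism (the csv file name below is arbitrary and the entries are only examples; complete lists appear later in this thread), you create a csv file on the Jetson host and the runtime then mounts the listed files into any container started with --runtime nvidia:

# On the Jetson host: add a plugin csv describing host files to mount
sudo tee /etc/nvidia-container-runtime/host-files-for-container.d/my-libs.csv > /dev/null <<'EOF'
lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.5.1.6
sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so.5
dir, /usr/lib/python3.6/dist-packages/tensorrt
EOF

# Containers started with the NVIDIA runtime now get these paths mounted
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.2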

OK, thank you very much @alvaro.a.calvo.

@AastaLLL The following is my attempt, but it failed. :(

I copied all of the .deb files downloaded by SDK Manager; the following is my installation attempt and output:

Step 11/11 : RUN dpkg -i cuda-repo-l4t-10-0-local-10.0.326_1.0-1_arm64.deb &&     dpkg -i libcudnn7_7.5.0.56-1+cuda10.0_arm64.deb && dpkg -i libcudnn7-dev_7.5.0.56-1+cuda10.0_arm64.deb && dpkg -i libcudnn7-doc_7.5.0.56-1+cuda10.0_arm64.deb &&     dpkg -i libnvinfer5_5.1.6-1+cuda10.0_arm64.deb && dpkg -i libnvinfer-dev_5.1.6-1+cuda10.0_arm64.deb && dpkg -i libnvinfer-samples_5.1.6-1+cuda10.0_all.deb
 ---> Running in 0ba9e800ed72
Selecting previously unselected package cuda-repo-l4t-10-0-local-10.0.326.
(Reading database ... 53836 files and directories currently installed.)
Preparing to unpack cuda-repo-l4t-10-0-local-10.0.326_1.0-1_arm64.deb ...
Unpacking cuda-repo-l4t-10-0-local-10.0.326 (1.0-1) ...
Setting up cuda-repo-l4t-10-0-local-10.0.326 (1.0-1) ...

The public CUDA GPG key does not appear to be installed.
To install the key, run this command:
sudo apt-key add /var/cuda-repo-10-0-local-10.0.326/7fa2af80.pub

Selecting previously unselected package libcudnn7.
(Reading database ... 53890 files and directories currently installed.)
Preparing to unpack libcudnn7_7.5.0.56-1+cuda10.0_arm64.deb ...
Unpacking libcudnn7 (7.5.0.56-1+cuda10.0) ...
Setting up libcudnn7 (7.5.0.56-1+cuda10.0) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Selecting previously unselected package libcudnn7-dev.
(Reading database ... 53896 files and directories currently installed.)
Preparing to unpack libcudnn7-dev_7.5.0.56-1+cuda10.0_arm64.deb ...
Unpacking libcudnn7-dev (7.5.0.56-1+cuda10.0) ...
Setting up libcudnn7-dev (7.5.0.56-1+cuda10.0) ...
update-alternatives: using /usr/include/aarch64-linux-gnu/cudnn_v7.h to provide /usr/include/cudnn.h (libcudnn) in auto mode
Selecting previously unselected package libcudnn7-doc.
(Reading database ... 53902 files and directories currently installed.)
Preparing to unpack libcudnn7-doc_7.5.0.56-1+cuda10.0_arm64.deb ...
Unpacking libcudnn7-doc (7.5.0.56-1+cuda10.0) ...
Setting up libcudnn7-doc (7.5.0.56-1+cuda10.0) ...
Selecting previously unselected package libnvinfer5.
(Reading database ... 53961 files and directories currently installed.)
Preparing to unpack libnvinfer5_5.1.6-1+cuda10.0_arm64.deb ...
Unpacking libnvinfer5 (5.1.6-1+cuda10.0) ...
dpkg: dependency problems prevent configuration of libnvinfer5:
 libnvinfer5 depends on cuda-cublas-10-0; however:
  Package cuda-cublas-10-0 is not installed.
 libnvinfer5 depends on cuda-cudart-10-0; however:
  Package cuda-cudart-10-0 is not installed.

dpkg: error processing package libnvinfer5 (--install):
 dependency problems - leaving unconfigured
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Errors were encountered while processing:
 libnvinfer5
The command '/bin/sh -c dpkg -i cuda-repo-l4t-10-0-local-10.0.326_1.0-1_arm64.deb &&     dpkg -i libcudnn7_7.5.0.56-1+cuda10.0_arm64.deb && dpkg -i libcudnn7-dev_7.5.0.56-1+cuda10.0_arm64.deb && dpkg -i libcudnn7-doc_7.5.0.56-1+cuda10.0_arm64.deb &&     dpkg -i libnvinfer5_5.1.6-1+cuda10.0_arm64.deb && dpkg -i libnvinfer-dev_5.1.6-1+cuda10.0_arm64.deb && dpkg -i libnvinfer-samples_5.1.6-1+cuda10.0_all.deb' returned a non-zero code: 1

It seems that it doesn’t work even though I have installed CUDA and cuDNN.
Looking forward to your reply, @AastaLLL.

I wish I had access to my computer to give you a better description of the steps I followed. I tried the same steps you mentioned above and ran into the same problems, so I would recommend you try mounting the libraries instead of installing them in the container. One big advantage of mounting the libraries into the container is disk space: these libraries are very heavy and the Jetson platform has limited space by default. This is why the NVIDIA developers follow this approach for mounting the CUDA libraries into their containers.

I started by finding the full name of the packages I need. For example:

dpkg -l | grep tensor

This should output a list of installed packages whose names contain the word “tensor”; in my case, I was looking for the TensorRT Python bindings. Once you have the full package name, you can list the files it provides:

dpkg -L <packagename>

Also navigate to the following directory:

/etc/nvidia-container-runtime/host-files-for-container.d/

Here you can see, in csv files, the libraries that the NVIDIA Docker runtime already mounts. Try creating a csv file for your package, following the format of the other csv files in that directory.
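
As a rough helper for turning the dpkg -L output into csv entries (a sketch only; the classification rules are my assumption based on the existing csv files: symlinks become sym, directories become dir, regular files become lib; review and prune the result by hand before using it):

# Hypothetical helper: print csv entries for every path shipped by a package
PKG=libopencv   # replace with the package name found via dpkg -l
dpkg -L "$PKG" | while read -r path; do
    if   [ -h "$path" ]; then echo "sym, $path"
    elif [ -d "$path" ]; then echo "dir, $path"   # drop top-level dirs like /usr by hand
    elif [ -f "$path" ]; then echo "lib, $path"
    fi
done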

@alvaro.a.calvo
Thank you very much. Your answer helped me understand how --runtime nvidia works, which had been bothering me for some time.

I followed your guide; see the tensorrt.csv file below:

lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.5.1.6
lib, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.1.6
lib, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.0.1.0
lib, /usr/lib/aarch64-linux-gnu/libnvonnxparser_runtime.so.0.1.0
lib, /usr/lib/aarch64-linux-gnu/libnvparsers.so.5.1.6
sym, /usr/lib/aarch64-linux-gnu/libnvcaffe_parser.so.5
sym, /usr/lib/aarch64-linux-gnu/libnvcaffe_parser.so.5.1.6
sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so.5
sym, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5
sym, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.0
sym, /usr/lib/aarch64-linux-gnu/libnvonnxparser_runtime.so.0
sym, /usr/lib/aarch64-linux-gnu/libnvparsers.so.5
dir, /usr/src/tensorrt

I verified in the image that these shared libraries and directories do exist.
Now my question is: what should I do if I want to import tensorrt in Python 3?
Just add a csv file with the following?

dir, /usr/lib/python3.6/dist-packages/tensorrt
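
A quick way to check that kind of entry (a sketch; it assumes python3 exists in the image and that the tensorrt.csv above already mounts the libnvinfer libraries the bindings link against):

# On the host: append the Python bindings directory to the csv
echo "dir, /usr/lib/python3.6/dist-packages/tensorrt" | \
    sudo tee -a /etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv

# Then try importing it from a container started with the NVIDIA runtime
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.2 \
    python3 -c "import tensorrt; print(tensorrt.__version__)"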

If I remember correctly that should work, but I don’t have my computer to verify it right now. That should be enough to be able to call TensorRT from Python inside the container. I also installed numpy and PyCUDA inside the container during the build, but you could try mounting them as well. If you run into any errors during the build, you can try making the NVIDIA runtime the default, which to my understanding will enable it during docker build; see the sketch below.
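
For reference, setting the default runtime is done on the Jetson host in /etc/docker/daemon.json (a sketch; merge it with any keys your daemon.json already contains):

# /etc/docker/daemon.json on the Jetson host
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF

# Restart Docker so the change takes effect; docker build then uses the NVIDIA runtime
sudo systemctl restart docker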

OK, I have solved this problem by adding the csv file. Thank you a lot.

That makes for quite a lot of entries to add for, say, OpenCV.

For instance, if I look at OpenCV (dpkg -L libopencv), I get output that starts with:

/usr
/usr/lib
/usr/lib/aarch64-linux-gnu
/usr/lib/aarch64-linux-gnu/libopencv_calib3d.so.4.1.1
/usr/lib/aarch64-linux-gnu/libopencv_core.so.4.1.1
/usr/lib/aarch64-linux-gnu/libopencv_dnn.so.4.1.1
/usr/lib/aarch64-linux-gnu/libopencv_features2d.so.4.1.1
/usr/lib/aarch64-linux-gnu/libopencv_flann.so.4.1.1
etc.

I guess I have to manually look through all of them and mark them as sym, lib or dir? I certainly don’t want to mount the whole /usr…

Do you have the actual opencv.csv file you could share, @alvaro.a.calvo? This is exactly what I need to build. That, plus TensorFlow, but I’ve got that one covered already (by installing it directly in the container, however).

Thanks!

Hi omartin,

Please open a new topic for your issue. Thanks

Hi! Can you provide more details about this approach? I have the same problem with missing cuBLAS and other libraries like it, and this approach is important for me because I want to install some libraries that depend on CUDA and cuDNN. I want a Docker image like the base one, but with CUDA, cuDNN, cuBLAS and the others included.

P.S. I have written a new topic about it.

Hi @alvaro.a.calvo, I’ve been following the posts on this. Before jumping into a hunt for the correct files to include in the opencv.csv, I’m wondering if you have such a file that I could use to mount the pre-installed version of OpenCV present in JetPack 4.3 into my Docker image. @mdegans has made available instructions on how to build a full version of OpenCV (the build is running as I write this) with proprietary algorithms and CUDA support, but for my current use the stock version would do fine, as I don’t need any of that for now. If I could just bind the provided OpenCV into the Docker image (as is already the case for a lot of other stuff like CUDA, TensorRT, etc.) I would be more than pleased for the time being, and it would give a faster and lighter image than compiling OpenCV from source. In the long run I’d rather have everything inside the image and not a single mount point, but since those mount points will be there for some time, I thought maybe I could also make use of the shortcut.

Thanks.
Best regards.
Eduardo

@ How can I install TensorRT in the l4t-base image? There is no TensorRT package for arm64.

Could you post the .csv, please? I’ve been struggling a lot with this.

@2220160244 TensorRT is already in the l4t-base image; you need to run l4t-base with --runtime nvidia, and the TensorRT libs/headers are then mounted into the container.

@emiliochambu this is how I install the version of OpenCV from JetPack into a container: https://github.com/dusty-nv/jetson-containers/blob/78148a41dba2718159dfa7179faa535eda03d0cc/Dockerfile.ml#L82
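
Roughly, that kind of approach looks like the sketch below, assuming the JetPack OpenCV .deb files have been copied into the build context (the libopencv-python package name and exact file naming are assumptions; see the linked Dockerfile for the maintained version):

FROM nvcr.io/nvidia/l4t-base:r32.2

# JetPack OpenCV packages copied from the Jetson (or SDK Manager cache) into the build context
COPY ./libopencv_*.deb ./libopencv-python_*.deb /tmp/

# apt resolves the dependencies of the local .deb files
RUN apt-get update && \
    apt-get install -y /tmp/libopencv_*.deb /tmp/libopencv-python_*.deb && \
    rm -rf /var/lib/apt/lists/* /tmp/*.deb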

This worked for me:

dir, /usr/lib/python3.6/dist-packages/cv2

lib, /usr/lib/aarch64-linux-gnu/libopencv_calib3d.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_core.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_dnn.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_features2d.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_flann.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_gapi.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_highgui.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_imgcodecs.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_imgproc.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_ml.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_objdetect.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_photo.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_stitching.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_video.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libopencv_videoio.so.4.1.1
lib, /usr/lib/aarch64-linux-gnu/libtbb.so.2
lib, /usr/lib/aarch64-linux-gnu/libgtk-x11-2.0.so.0.2400.32

sym, /usr/lib/aarch64-linux-gnu/libopencv_calib3d.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_core.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_dnn.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_features2d.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_flann.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_gapi.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_highgui.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_imgcodecs.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_imgproc.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_ml.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_objdetect.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_photo.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_stitching.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_video.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_videoio.so
sym, /usr/lib/aarch64-linux-gnu/libtbb.so
sym, /usr/lib/aarch64-linux-gnu/libgtk-x11-2.0.so.0

dev, /dev/nvhost-nvdec1
dev, /dev/nvhost-ctrl-nvdla0
dev, /dev/nvhost-ctrl-nvdla1
dev, /dev/nvhost-nvdla0
dev, /dev/nvhost-nvdla1
dev, /dev/nvidiactl
dev, /dev/nvhost-nvenc1
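
To sanity-check the mounts (a sketch; it assumes python3 is present in the image), start a container with the NVIDIA runtime and try importing cv2:

sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.2 \
    python3 -c "import cv2; print(cv2.__version__)"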
