I don't think that image can work unless Nvidia rebuilds it. My understanding is that nvidia-docker on Tegra is kind of hacky: it bind mounts a bunch of things from the host into the container, so host and container always need to be in sync. My understanding is it was done to reduce image size. I hope they find a better way. It's why I've been avoiding nvidia-docker on Tegra.
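If you want to see exactly what gets mounted in, the Tegra container runtime reads CSV manifests listing host files to bind. The path below is an assumption based on how nvidia-container-runtime is packaged for JetPack 4.x; check your own install:

```
# Hypothetical inspection on a Jetson host (path assumed, JetPack 4.x):
ls /etc/nvidia-container-runtime/host-files-for-container.d/
# Each CSV lists type,path pairs that get bind mounted into the container:
head /etc/nvidia-container-runtime/host-files-for-container.d/*.csv
```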
Tegra:
$ apt search cudnn
Sorting... Done
Full Text Search... Done
libcudnn7/stable,now 7.6.3.28-1+cuda10.0 arm64 [installed]
cuDNN runtime libraries
libcudnn7-dev/stable,now 7.6.3.28-1+cuda10.0 arm64 [installed]
cuDNN development libraries and headers
x86-64:
$ apt search cudnn
Sorting... Done
Full Text Search... Done
libcudnn7/unknown,now 7.6.5.32-1+cuda10.2 amd64 [installed,automatic]
cuDNN runtime libraries
libcudnn7-dev/unknown,now 7.6.5.32-1+cuda10.2 amd64 [installed]
cuDNN development libraries and headers
That's a possible solution to make a unified Dockerfile for Tegra and x86.
Edit: you will probably have to install Nvidia's apt sources and keys on a stock Ubuntu aarch64 image (or find an Nvidia image with the online apt repos already enabled). Unfortunately, Nvidia has no keyserver for Tegra, so you'll have to grab the key from the BSP tarball. That means no fully unified Dockerfile, I suppose, but at least fewer differences.
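To make the keys-and-sources step concrete, here's roughly what it could look like as Dockerfile lines. Everything here is an assumption: the key filename, the repo URL (the online Jetson repos at repo.download.nvidia.com appeared around JetPack 4.3/4.4), and the release tag all need checking against your BSP.

```dockerfile
# Hypothetical: key extracted from the BSP tarball beforehand and placed
# next to the Dockerfile; BOARD is board dependent (e.g. t194 for Xavier).
ARG BOARD=t194
ARG L4T_RELEASE=r32.4
COPY jetson-ota-public.asc /etc/apt/trusted.gpg.d/
RUN echo "deb https://repo.download.nvidia.com/jetson/common ${L4T_RELEASE} main" \
      > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
 && echo "deb https://repo.download.nvidia.com/jetson/${BOARD} ${L4T_RELEASE} main" \
      >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
```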
Yes, I can install libcudnn7 in a generic arm64 Ubuntu Docker container on a Jetson:
However, it won't work in the L4T container, probably because of the existing bind mounts to the system-wide configuration:
unable to make backup link of './usr/lib/aarch64-linux-gnu/libcudnn.so.7.6.3' before installing new version: Invalid cross-device link
dmesg: read kernel buffer failed: Operation not permitted
Yes. The L4T image can't work because of how it's designed; however, it might (edit: it does, reading comprehension fail) if you start from Ubuntu aarch64.
FROM ubuntu:bionic
# ... build args and stuff ...
RUN download L4T bsp \
&& extract \
&& copy apt key as asc and use `apt-key` \
&& add apt sources (this is board dependent unfortunately, so a build argument may be what you want) \
&& delete tarball and extracted files
RUN apt-get update && apt-get install -y --no-install-recommends \
all \
your \
runtime \
deps \
including \
tensorrt
RUN apt-get update && apt-get install -y --no-install-recommends \
all-dev \
build-dev \
deps-dev \
&& get your app source \
&& build source \
&& make install \
&& delete source \
&& apt-get purge -y --autoremove \
all-dev \
build-dev \
deps-dev
# (create appuser somewhere above)
USER appuser:appuser
ENTRYPOINT [ "yourapp", "--someflag" ]
That's the basic idea. I have no idea if it will work, but if you install everything inside the image, it could possibly avoid the bind-mount-related issues.
I am unsure how, and whether, you can turn that off with nvidia-docker on Tegra. I haven't used it, so I don't know if the -v options are passed in a wrapper script, an alias, or something else. You'll have to investigate and modify accordingly… or wait for Nvidia to change the design so it works as on x86, which is frankly great.
Ideally, IMO, a Dockerfile that works on x86-64 nvidia-docker should also work on Tegra, and if it doesn't, nvidia-docker needs fixing (and potentially apt repos and a lot of other things).
I am also getting the same issue installing OpenCV 4.3 on Xavier: it detects that cuDNN is installed, but won't recognise it as suitable for the OpenCV build.
(Reading database ... 194764 files and directories currently installed.)
Preparing to unpack libcudnn8-dev_8.0.0.145-1+cuda10.2_arm64.deb ...
update-alternatives: removing manually selected alternative - switching libcudnn to auto mode
Unpacking libcudnn8-dev (8.0.0.145-1+cuda10.2) over (8.0.0.145-1+cuda10.2) ...
Setting up libcudnn8-dev (8.0.0.145-1+cuda10.2) ...
update-alternatives: using /usr/include/aarch64-linux-gnu/cudnn_v8.h to provide /usr/include/cudnn.h (libcudnn) in auto mode
nvidia@linux:~/Downloads/deb$ sudo ldconfig
nvidia@linux:~/Downloads/deb$ cd ..
nvidia@linux:~/Downloads$ cd ..
nvidia@linux:~$ cd opencv-4.3.0/
nvidia@linux:~/opencv-4.3.0$ cd build/
nvidia@linux:~/opencv-4.3.0/build$ cmake -D WITH_CUDA=ON -D WITH_CUDNN=ON -D OPENCV_DNN_CUDA=ON -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -DWITH_CUBLAS=1 -D CUDA_ARCH_BIN="7.2" -D CUDA_ARCH_PTX="" -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=ON -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.3.0/modules -D BUILD_opencv_python3=yes -D PYTHON_LIBRARY=/usr/lib/python3.6/config-3.6m-aarch64-linux-gnu/libpython3.6m.so -D BUILD_opencv_cudacodec=OFF -D OPENCV_GENERATE_PKGCONFIG=ON ..
-- Detected processor: aarch64
-- Looking for ccache - not found
-- Found ZLIB: /usr/lib/aarch64-linux-gnu/libz.so (found suitable version "1.2.11", minimum required is "1.2.3")
-- Could NOT find OpenJPEG (minimal suitable version: 2.0, recommended version >= 2.3.1)
-- Could NOT find Jasper (missing: JASPER_LIBRARIES JASPER_INCLUDE_DIR)
-- Found ZLIB: /usr/lib/aarch64-linux-gnu/libz.so (found version "1.2.11")
-- Could NOT find CUDNN: Found unsuitable version "..", but required is at least "7.5" (found /usr/lib/aarch64-linux-gnu/libcudnn.so)
-- CUDA detected: 10.2
-- CUDA NVCC target flags: -gencode;arch=compute_72,code=sm_72;-D_FORCE_INLINES
-- Could not find OpenBLAS include. Turning OpenBLAS_FOUND off
-- Could not find OpenBLAS lib. Turning OpenBLAS_FOUND off
-- Could NOT find Atlas (missing: Atlas_CLAPACK_INCLUDE_DIR)
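The empty version string ("..") in that output is a known cuDNN 8 quirk: the CUDNN_MAJOR/MINOR/PATCHLEVEL defines moved out of cudnn.h into a new cudnn_version.h header, which OpenCV 4.3's FindCUDNN doesn't read. A self-contained sketch of what the version probe sees (fake headers; nothing here touches a real cuDNN install):

```shell
# Fake headers reproducing the cuDNN 8 layout (no Jetson needed):
tmp=$(mktemp -d)
printf '#define CUDNN_MAJOR 8\n#define CUDNN_MINOR 0\n#define CUDNN_PATCHLEVEL 0\n' \
  > "$tmp/cudnn_version.h"
# In cuDNN 8, cudnn.h no longer carries the version macros itself:
printf '#include "cudnn_version.h"\n' > "$tmp/cudnn.h"
# An OpenCV-4.3-style scrape of cudnn.h therefore finds nothing,
# hence the unsuitable version "..":
in_main=$(grep -c 'define CUDNN_MAJOR' "$tmp/cudnn.h" || true)
in_version=$(grep -c 'define CUDNN_MAJOR' "$tmp/cudnn_version.h")
echo "macros in cudnn.h: $in_main, in cudnn_version.h: $in_version"
rm -rf "$tmp"
```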
If you check my Docker Hub page, I have working images for OpenCV 4.2.0 and 4.3.0; however, they can't work with the latest JetPack for now because of the way nvidia-docker works on Tegra (bind-mounting libs).
For the moment, 4.3.0 won't build with cuDNN 8. I expect the OpenCV maintainers will fix it shortly; it's a "won't fix" issue on my build script currently. If you care about using cuDNN with OpenCV, you may have to downgrade the package, if that's at all possible. Normally you could pin it, but since there are different apt sources… :-\
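For reference, the pin would look something like this (package names taken from the apt output earlier in the thread; whether the cuDNN 7 packages are still published in your particular apt source is the open question):

```
# /etc/apt/preferences.d/cudnn -- hypothetical: refuse the cuDNN 8 packages
# so apt keeps (or reinstalls) libcudnn7 instead
Package: libcudnn8 libcudnn8-dev
Pin: release *
Pin-Priority: -1
```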
That's a good suggestion @Honey_Patouceul. I added it to the issue. If anybody tries it, please let me know whether it works. Right now I need my machines on 4.3 for some extended testing.
Hello,
Works perfectly! But I have another error…
Starting cmake
-- Detected processor: aarch64
-- Looking for ccache - found (/usr/bin/ccache)
-- Found ZLIB: /usr/lib/aarch64-linux-gnu/libz.so (found suitable version "1.2.11", minimum required is "1.2.3")
-- Found OpenJPEG: openjp2 (found version "2.3.1")
-- Found ZLIB: /usr/lib/aarch64-linux-gnu/libz.so (found version "1.2.11")
-- Found TBB (env): /usr/lib/aarch64-linux-gnu/libtbb.so
CMake Error at cmake/FindCUDNN.cmake:68 (file):
  file failed to open for reading (No such file or directory):
    /usr/lib/aarch64-linux-gnu/cudnn.h
Call Stack (most recent call first):
  cmake/OpenCVUtils.cmake:131 (find_package)
  cmake/OpenCVDetectCUDA.cmake:42 (find_host_package)
  cmake/OpenCVFindLibsPerf.cmake:43 (include)
  CMakeLists.txt:687 (include)
-- Found CUDNN: /usr/lib/aarch64-linux-gnu/libcudnn.so (found suitable version "8.0", minimum required is "7.5")
-- CUDA detected: 10.2
Starting cmake
-- Detected processor: aarch64
-- Looking for ccache - found (/usr/bin/ccache)
-- Found ZLIB: /usr/lib/aarch64-linux-gnu/libz.so (found suitable version "1.2.11", minimum required is "1.2.3")
-- Found OpenJPEG: openjp2 (found version "2.3.1")
-- Found ZLIB: /usr/lib/aarch64-linux-gnu/libz.so (found version "1.2.11")
-- Found TBB (env): /usr/lib/aarch64-linux-gnu/libtbb.so
-- Found CUDNN: /usr/lib/aarch64-linux-gnu/libcudnn.so (found suitable version "8.0", minimum required is "7.5")
-- CUDA detected: 10.2
-- CUDA NVCC target flags: -gencode;arch=compute_72,code=sm_72;-D_FORCE_INLINES
If you've never done one before, that's fine too. In this case I'll probably add it later today or on Monday to the development branch, and post a note back here giving you and @Honey_Patouceul credit.
If you're using JetPack 4.4, you'll probably have to wait for a new build, but JetPack 4.3 works with OpenCV 4.2 and 4.3. I will launch a new build tomorrow and push it to Docker Hub, but I'm not sure what tags I'll use for JetPack 4.4.
About the architecture issue and Docker Hub: I'm not sure why the metadata is wrong for that particular image; it may be the base image. It's certainly worth investigating, but the image is aarch64 for sure, and it does work.
Re Argus: I think you just need to bind mount the Argus socket; it's in /tmp. IIRC, --runtime nvidia does this for you.
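If you end up needing the manual form, it would be roughly this (socket path assumed; on L4T the camera daemon's socket is typically /tmp/argus_socket):

```
docker run --runtime nvidia \
  -v /tmp/argus_socket:/tmp/argus_socket \
  your-image:tag
```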