The way `--runtime` is implemented on Tegra breaks nearly everything. Since the CUDA libraries are not included in the image itself, it's difficult, if not impossible, to build against them. There are stubs in /usr/local/cuda/lib64/stubs, but most (all?) build scripts won't find them.
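One partial workaround is to point the toolchain at the stub directory explicitly. A sketch, assuming the stubs ship as unversioned `.so` files (exact filenames vary by JetPack release, and note that CMake-based builds still won't pick these up, see below):

```shell
# GCC's driver reads LIBRARY_PATH at link time, and many autoconf-style
# build scripts honor LDFLAGS, so exporting both makes the stubs visible
# to builds that use those mechanisms.
export LIBRARY_PATH=/usr/local/cuda/lib64/stubs:$LIBRARY_PATH
export LDFLAGS="-L/usr/local/cuda/lib64/stubs $LDFLAGS"
```

This only helps tools that actually consult those variables; it is not a general fix.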
Please, Nvidia, implement `--runtime` on all platforms the way it's implemented on x86, and include multi-arch support in your images, as is common on Docker Hub. That way I could just `FROM cuda:latest` or `FROM deepstream:latest` and it would just work.
I can pull `alpine` or `ubuntu` on x86 or aarch64, and everything is in the same place with identical package names. By contrast, trying to build OpenCV with CUDA support on top of
nvcr.io/nvidia/l4t-base:r32.3.1, for example, fails with:
```
...
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_cublas_LIBRARY (ADVANCED)
    linked by target "opencv_cudev" in directory /tmp/build_opencv/opencv_contrib/modules/cudev
    linked by target "opencv_test_cudev" in directory /tmp/build_opencv/opencv_contrib/modules/cudev/test
    linked by target "opencv_test_core" in directory /tmp/build_opencv/opencv/modules/core
    linked by target "opencv_perf_core" in directory /tmp/build_opencv/opencv/modules/core
    linked by target "opencv_core" in directory /tmp/build_opencv/opencv/modules/core
    linked by target "opencv_test_cudaarithm" in directory /tmp/build_opencv/opencv_contrib/modules/cudaarithm
    linked by target "opencv_cudaarithm" in directory /tmp/build_opencv/opencv_contrib/modules/cudaarithm
...
```
Edit: I eventually found this tip specific to CMake, which solved the issue, though it doesn't seem to be required anymore on x86. I learned something new about CMake today: it ignores LD_LIBRARY_PATH when locating libraries. I would mark this as solved if I could. @TomK or admin, feel free to do so.
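For anyone hitting the same error: the usual shape of this kind of CMake-side fix is to pass the library locations explicitly at configure time, since CMake's `find_library()` does not consult LD_LIBRARY_PATH. A sketch only; the stub paths below are illustrative and not verified against this exact L4T release:

```shell
# CMAKE_LIBRARY_PATH extends CMake's library search path; failing that,
# the NOTFOUND variable from the error can be set directly.
cmake \
  -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
  -D CMAKE_LIBRARY_PATH=/usr/local/cuda/lib64/stubs \
  -D CUDA_cublas_LIBRARY=/usr/local/cuda/lib64/stubs/libcublas.so \
  ..
```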