Problem creating my devel container on my Jetson AGX

Hello

To compile the tkDNN project I need:
CUDA 10.2 (dev)
cuDNN 8.0.0 (dev)
TensorRT 7.2 (dev)

I need to do this inside a container, but… you didn't upload previous devel tags for docker pull nvcr.io/nvidia/l4t-base:

My first approach, on my Jetson AGX flashed with L4T 32.6, was to use a container that ships self-contained CUDA packages (docker pull nvcr.io/nvidia/l4t-cuda:10.2.460-runtime).

Then, inside the container, add the repositories:

ARCH=arm64 SOC=t194
https://developer.download.nvidia.com/compute/cuda/repos/${OS}/${ARCH}
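
(For reference, a minimal sketch of how I register that repository inside the container; the ${OS} value, the key file name, and the sources.list path are my assumptions and may need adjusting:)

# Sketch only: register the CUDA apt repository inside the container
ARCH=arm64 SOC=t194
OS=ubuntu1804   # assumption: set this to match the base image
apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/${OS}/${ARCH}/3bf863cc.pub
echo "deb https://developer.download.nvidia.com/compute/cuda/repos/${OS}/${ARCH} /" > /etc/apt/sources.list.d/cuda.list
apt-get update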

… and install the following packages …

ENV NV_CUDNN_VERSION 8.2.4.15
ENV NV_CUDNN_PACKAGE_NAME libcudnn8
ENV NV_CUDNN_PACKAGE ${NV_CUDNN_PACKAGE_NAME}=${NV_CUDNN_VERSION}-1+cuda10.2
ENV NV_CUDNN_PACKAGE_DEV ${NV_CUDNN_PACKAGE_NAME}-dev=${NV_CUDNN_VERSION}-1+cuda10.2

…with…

apt install -y --no-install-recommends \
    ${NV_CUDNN_PACKAGE} \
    ${NV_CUDNN_PACKAGE_DEV} && \
    apt-mark hold ${NV_CUDNN_PACKAGE_NAME}

At this point I can compile OpenCV, so that part is fine.
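
(To double-check that the cuDNN dev package really landed, I verify with something like the following; the header path is an assumption and may instead be under /usr/include/aarch64-linux-gnu:)

# Sketch: confirm the cuDNN runtime and dev packages are installed and held
dpkg -l | grep libcudnn8
apt-mark showhold
grep -m1 CUDNN_MAJOR /usr/include/cudnn_version.h   # assumed header location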

But the tkDNN project I am trying to compile requires tensorrt-dev, which I try to satisfy with:

apt install -y  libnvinfer7=${TRT_VERSION} \
                    libnvonnxparsers7=${TRT_VERSION} \
                    libnvparsers7=${TRT_VERSION} \
                    libnvonnxparsers-dev=${TRT_VERSION} \
                    libnvparsers-dev=${TRT_VERSION} \
                    libnvinfer-plugin7=${TRT_VERSION} \
                    libnvinfer-plugin-dev=${TRT_VERSION} \
                    libnvinfer-dev=${TRT_VERSION} \
                    python-libnvinfer=${TRT_VERSION} \
                    python3-libnvinfer=${TRT_VERSION}
apt-mark hold libnvinfer7 libnvonnxparsers7 libnvparsers7 libnvinfer-plugin7 libnvinfer-dev libnvonnxparsers-dev libnvparsers-dev libnvinfer-plugin-dev python-libnvinfer python3-libnvinfer
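
(${TRT_VERSION} is defined earlier in my Dockerfile; as a hypothetical example of the expected format, where the exact string must match what the repository provides:)

# Hypothetical example value only
ENV TRT_VERSION 7.1.3-1+cuda10.2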

Everything looks fine, but the libraries do not link correctly:

/usr/bin/ld: warning: libnvdla_compiler.so, needed by /usr/lib/aarch64-linux-gnu/libnvinfer.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libnvmedia.so, needed by /usr/lib/aarch64-linux-gnu/libnvinfer.so, not found (try using -rpath or -rpath-link)
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorDestroy'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaGetMaxOutstandingTasks'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorEglStreamConsumerDestroy'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorEglStreamConsumerAcquireMetaData'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaEglStreamProducerGetTensor'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaDataUnregister'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaGetOutputTensorDescriptor'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::destroyNetwork(nvdla::INetwork*)'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::createWisdom()'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorGetMetaData'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaEglStreamProducerPostTensor'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaAppendLoadable'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorGetStatus'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaDataRegister'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaLoadableCreate'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::destroyWisdom(nvdla::IWisdom*)'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaGetInputTensorDescriptor'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorEglStreamProducerCreate'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDeviceDestroy'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorCreate'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorEglStreamProducerDestroy'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorEglStreamConsumerCreate'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaInit'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorLock'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `nvdla::createNetwork()'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaSubmit'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaEglStreamConsumerAcquireTensor'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaLoadLoadable'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaCreate'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaEglStreamConsumerReleaseTensor'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaGetNumEngines'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaDestroy'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorEglStreamProducerPostMetaData'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDeviceCreate'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaDlaSetCurrentLoadable'
/usr/lib/aarch64-linux-gnu/libnvinfer.so: undefined reference to `NvMediaTensorUnlock'
collect2: error: ld returned 1 exit status

And I cannot find an NGC devel image that fits my need, even taking into account that what I am trying to get are the r32.4 CUDA-X versions.

Just in case it is helpful, cmake -LA gives this output:

Cloning into 'tkDNN'...
CMake Warning (dev) at CMakeLists.txt:21:
  Syntax Warning in cmake code at column 30

  Argument not separated from preceding token by whitespace.
This warning is for project developers.  Use -Wno-dev to suppress it.

-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda (found suitable version "10.2", minimum required is "9.0") 
-- Found CUDNN: /usr/lib/aarch64-linux-gnu/libcudnn.so
-- Found CUDNN include: /usr/include
-- Found NVINFER: /usr/lib/aarch64-linux-gnu/libnvinfer.so
-- Found NVINFER include: /usr/include/aarch64-linux-gnu
-- Found CUDNN: /usr/lib/aarch64-linux-gnu/libcudnn.so  
Eigen DIR: /usr/include/eigen3
-- Found OpenCV: /usr/local (found version "4.5.2") 
install dir:/usr/local
-- Configuring done
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_cublas_LIBRARY (ADVANCED)
    linked by target "tkDNN" in directory /tmp/install_tkdnn/tkDNN
    linked by target "kernels" in directory /tmp/install_tkdnn/tkDNN

-- Generating done
CMake Generate step failed.  Build files cannot be regenerated correctly.
-- Cache values
CMAKE_ADDR2LINE:FILEPATH=/usr/bin/addr2line
CMAKE_AR:FILEPATH=/usr/bin/ar
CMAKE_BUILD_TYPE:STRING=
CMAKE_COLOR_MAKEFILE:BOOL=ON
CMAKE_CXX_COMPILER:FILEPATH=/usr/bin/c++
CMAKE_CXX_COMPILER_AR:FILEPATH=/usr/bin/gcc-ar-7
CMAKE_CXX_COMPILER_RANLIB:FILEPATH=/usr/bin/gcc-ranlib-7
CMAKE_CXX_FLAGS:STRING=
CMAKE_CXX_FLAGS_DEBUG:STRING=-g
CMAKE_CXX_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG
CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
CMAKE_CXX_FLAGS_RELWITHDEBINFO:STRING=-O2 -g -DNDEBUG
CMAKE_C_COMPILER:FILEPATH=/usr/bin/cc
CMAKE_C_COMPILER_AR:FILEPATH=/usr/bin/gcc-ar-7
CMAKE_C_COMPILER_RANLIB:FILEPATH=/usr/bin/gcc-ranlib-7
CMAKE_C_FLAGS:STRING=
CMAKE_C_FLAGS_DEBUG:STRING=-g
CMAKE_C_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG
CMAKE_C_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
CMAKE_C_FLAGS_RELWITHDEBINFO:STRING=-O2 -g -DNDEBUG
CMAKE_DLLTOOL:FILEPATH=CMAKE_DLLTOOL-NOTFOUND
CMAKE_EXE_LINKER_FLAGS:STRING=
CMAKE_EXE_LINKER_FLAGS_DEBUG:STRING=
CMAKE_EXE_LINKER_FLAGS_MINSIZEREL:STRING=
CMAKE_EXE_LINKER_FLAGS_RELEASE:STRING=
CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO:STRING=
CMAKE_EXPORT_COMPILE_COMMANDS:BOOL=
CMAKE_INSTALL_PREFIX:PATH=/usr/local
CMAKE_LINKER:FILEPATH=/usr/bin/ld
CMAKE_MAKE_PROGRAM:FILEPATH=/usr/bin/make
CMAKE_MODULE_LINKER_FLAGS:STRING=
CMAKE_MODULE_LINKER_FLAGS_DEBUG:STRING=
CMAKE_MODULE_LINKER_FLAGS_MINSIZEREL:STRING=
CMAKE_MODULE_LINKER_FLAGS_RELEASE:STRING=
CMAKE_MODULE_LINKER_FLAGS_RELWITHDEBINFO:STRING=
CMAKE_NM:FILEPATH=/usr/bin/nm
CMAKE_OBJCOPY:FILEPATH=/usr/bin/objcopy
CMAKE_OBJDUMP:FILEPATH=/usr/bin/objdump
CMAKE_RANLIB:FILEPATH=/usr/bin/ranlib
CMAKE_READELF:FILEPATH=/usr/bin/readelf
CMAKE_SHARED_LINKER_FLAGS:STRING=
CMAKE_SHARED_LINKER_FLAGS_DEBUG:STRING=
CMAKE_SHARED_LINKER_FLAGS_MINSIZEREL:STRING=
CMAKE_SHARED_LINKER_FLAGS_RELEASE:STRING=
CMAKE_SHARED_LINKER_FLAGS_RELWITHDEBINFO:STRING=
CMAKE_SKIP_INSTALL_RPATH:BOOL=NO
CMAKE_SKIP_RPATH:BOOL=NO
CMAKE_STATIC_LINKER_FLAGS:STRING=
CMAKE_STATIC_LINKER_FLAGS_DEBUG:STRING=
CMAKE_STATIC_LINKER_FLAGS_MINSIZEREL:STRING=
CMAKE_STATIC_LINKER_FLAGS_RELEASE:STRING=
CMAKE_STATIC_LINKER_FLAGS_RELWITHDEBINFO:STRING=
CMAKE_STRIP:FILEPATH=/usr/bin/strip
CMAKE_VERBOSE_MAKEFILE:BOOL=FALSE
CUDA_64_BIT_DEVICE_CODE:BOOL=ON
CUDA_ATTACH_VS_BUILD_RULE_TO_CUDA_FILE:BOOL=ON
CUDA_BUILD_CUBIN:BOOL=OFF
CUDA_BUILD_EMULATION:BOOL=OFF
CUDA_CUDART_LIBRARY:FILEPATH=/usr/local/cuda/lib64/libcudart.so
CUDA_CUDA_LIBRARY:FILEPATH=CUDA_CUDA_LIBRARY-NOTFOUND
CUDA_GENERATED_OUTPUT_DIR:PATH=
CUDA_HOST_COMPILATION_CPP:BOOL=ON
CUDA_HOST_COMPILER:FILEPATH=/usr/bin/cc
CUDA_NVCC_EXECUTABLE:FILEPATH=/usr/local/cuda/bin/nvcc
CUDA_NVCC_FLAGS:STRING=
CUDA_NVCC_FLAGS_DEBUG:STRING=
CUDA_NVCC_FLAGS_MINSIZEREL:STRING=
CUDA_NVCC_FLAGS_RELEASE:STRING=
CUDA_NVCC_FLAGS_RELWITHDEBINFO:STRING=
CUDA_OpenCL_LIBRARY:FILEPATH=CUDA_OpenCL_LIBRARY-NOTFOUND
CUDA_PROPAGATE_HOST_FLAGS:BOOL=ON
CUDA_SDK_ROOT_DIR:PATH=CUDA_SDK_ROOT_DIR-NOTFOUND
CUDA_SEPARABLE_COMPILATION:BOOL=OFF
CUDA_TOOLKIT_INCLUDE:PATH=/usr/local/cuda/include
CUDA_TOOLKIT_ROOT_DIR:PATH=/usr/local/cuda
CUDA_USE_STATIC_CUDA_RUNTIME:BOOL=ON
CUDA_VERBOSE_BUILD:BOOL=OFF
CUDA_VERSION:STRING=10.2
CUDA_cublas_LIBRARY:FILEPATH=CUDA_cublas_LIBRARY-NOTFOUND
CUDA_cudadevrt_LIBRARY:FILEPATH=/usr/local/cuda/lib64/libcudadevrt.a
CUDA_cudart_static_LIBRARY:FILEPATH=/usr/local/cuda/lib64/libcudart_static.a
CUDA_cufft_LIBRARY:FILEPATH=CUDA_cufft_LIBRARY-NOTFOUND
CUDA_cupti_LIBRARY:FILEPATH=CUDA_cupti_LIBRARY-NOTFOUND
CUDA_curand_LIBRARY:FILEPATH=CUDA_curand_LIBRARY-NOTFOUND
CUDA_cusolver_LIBRARY:FILEPATH=CUDA_cusolver_LIBRARY-NOTFOUND
CUDA_cusparse_LIBRARY:FILEPATH=CUDA_cusparse_LIBRARY-NOTFOUND
CUDA_nppc_LIBRARY:FILEPATH=CUDA_nppc_LIBRARY-NOTFOUND
CUDA_nppi_LIBRARY:FILEPATH=CUDA_nppi_LIBRARY-NOTFOUND
CUDA_nppial_LIBRARY:FILEPATH=CUDA_nppial_LIBRARY-NOTFOUND
CUDA_nppicc_LIBRARY:FILEPATH=CUDA_nppicc_LIBRARY-NOTFOUND
CUDA_nppicom_LIBRARY:FILEPATH=CUDA_nppicom_LIBRARY-NOTFOUND
CUDA_nppidei_LIBRARY:FILEPATH=CUDA_nppidei_LIBRARY-NOTFOUND
CUDA_nppif_LIBRARY:FILEPATH=CUDA_nppif_LIBRARY-NOTFOUND
CUDA_nppig_LIBRARY:FILEPATH=CUDA_nppig_LIBRARY-NOTFOUND
CUDA_nppim_LIBRARY:FILEPATH=CUDA_nppim_LIBRARY-NOTFOUND
CUDA_nppist_LIBRARY:FILEPATH=CUDA_nppist_LIBRARY-NOTFOUND
CUDA_nppisu_LIBRARY:FILEPATH=CUDA_nppisu_LIBRARY-NOTFOUND
CUDA_nppitc_LIBRARY:FILEPATH=CUDA_nppitc_LIBRARY-NOTFOUND
CUDA_npps_LIBRARY:FILEPATH=CUDA_npps_LIBRARY-NOTFOUND
CUDA_nvToolsExt_LIBRARY:FILEPATH=/usr/local/cuda/lib64/libnvToolsExt.so
CUDA_rt_LIBRARY:FILEPATH=/usr/lib/aarch64-linux-gnu/librt.so
CUDNN_INCLUDE_DIR:PATH=/usr/include
CUDNN_LIBRARY:FILEPATH=/usr/lib/aarch64-linux-gnu/libcudnn.so
Eigen3_DIR:PATH=/usr/lib/cmake/eigen3
NVINFER_INCLUDE_DIR:PATH=/usr/include/aarch64-linux-gnu
NVINFER_LIBRARY:FILEPATH=/usr/lib/aarch64-linux-gnu/libnvinfer.so
OpenCV_DIR:PATH=/usr/local/lib/cmake/opencv4
yaml-cpp_DIR:PATH=/usr/lib/aarch64-linux-gnu/cmake/yaml-cpp

So my guess is that TensorRT is mismatching with something, which is strange because I am using the r32.4 versions (I also tried the r32.5 versions, downgrading TensorRT slightly according to the JetPack archive website).
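
(For reference, a quick way to confirm which L4T and TensorRT versions are actually present, on the host and in the container:)

# On the host: check the L4T release
cat /etc/nv_tegra_release
# In the container: check which TensorRT / cuDNN debs are installed
dpkg -l | grep -E 'nvinfer|cudnn'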

Hi @masip85, there is an l4t-tensorrt container that you can use if you need self-contained cuDNN/TensorRT runtimes. Alternatively, you can just use l4t-base, which mounts CUDA/cuDNN/TensorRT (plus the dev headers) into the container at runtime with --runtime nvidia.

If you are using l4t-base and you need CUDA/etc. during docker build operations, then make the nvidia runtime the default as shown here: https://github.com/dusty-nv/jetson-containers#docker-default-runtime
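
(For reference, a minimal sketch of what that link describes: set the default runtime in /etc/docker/daemon.json and restart Docker, assuming the nvidia runtime is already installed on the host:)

sudo tee /etc/docker/daemon.json <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF
sudo systemctl restart docker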

Hello

I tried that and couldn't get it to work.

By the way, NVIDIA NGC doesn't satisfy my TensorRT 7 requirement. Why do you have only one version? There are 0 alternative versions of that container :(

Maybe my error was because I wasn't using the nvidia runtime as the default? I can't recall. But I wonder:
Is it mandatory to inherit from the host in order to compile CUDA/cuDNN/TensorRT projects inside containers?

I ask this because we have different NVIDIA devices with different JetPack versions. Following your advice, compilation inside the base container will behave differently depending on each host and its CUDA version, right?

Please, @dusty_nv, could you reply to at least my first two questions?

  • What do we do if we need previous TensorRT versions in NVIDIA NGC?

  • Is it mandatory to inherit from the host in order to compile CUDA/cuDNN/TensorRT projects inside containers?

Sorry for the delay - the l4t-tensorrt container was new with JetPack 4.6, and it's the first version of that container, hence only a TensorRT 8 container is available. Currently you should only run the version of CUDA/cuDNN/TensorRT that comes with that particular version of JetPack, due to low-level dependencies in the L4T kernel and GPU driver. However, in the future we are moving towards a model where CUDA/cuDNN/TensorRT are installed inside the containers, with the option of having different versions of those.

It's not mandatory per se, and you can in theory create your own CUDA/cuDNN/TensorRT containers by installing the Debian packages into them. If you need an older version of TensorRT, I would currently recommend using the older version of JetPack that ships with your desired version of TensorRT.
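
(As a sketch of that last suggestion, assuming the host has been flashed with a JetPack release that ships TensorRT 7, for example JetPack 4.5 / L4T r32.5; the l4t-base tag must match the host's L4T version:)

# Sketch: on the older-JetPack host, run the matching l4t-base tag with the nvidia runtime
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.5.0
# Inside the container, CUDA/cuDNN/TensorRT (including dev headers) are mounted from the host
ls /usr/lib/aarch64-linux-gnu/libnvinfer*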