Hi, I’m currently using nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples to build my own Docker image. But I need to build my app during the image build with a “RUN make” step in the Dockerfile, and it returns an error… Here is a simplified Dockerfile (I know that some libs are missing) that gives the same error:
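(Sketched roughly; the app directory, Makefile, and paths are just placeholders, not my real project — only the base image and the `RUN make` step matter for reproducing the error.)

```dockerfile
FROM nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples

# Placeholder app that links against CUDA / DeepStream libs
COPY my_app/ /opt/my_app/
WORKDIR /opt/my_app

# This is the step that fails during `docker build`
RUN make
```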
Hi, I did not run exactly the l4t DeepStream image when I faced this issue. I was using deepstream:5.1-21.02-triton when I encountered the same problem as you. When running make from docker build, it throws the error as soon as it reaches the make step. However, if I skip the make step during the build and only run make after entering the container (via docker exec -it <container> bash), it works.
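Roughly, the workaround looks like this (the image name, container name, and app path are placeholders for your own):

```shell
# Build the image with the RUN make line removed for now
docker build -t my-ds-app .

# Start the container with GPU access via the nvidia runtime
docker run -d --runtime=nvidia --name ds-dev my-ds-app sleep infinity

# Enter the running container and build there -- this works
docker exec -it ds-dev bash
make -C /opt/my_app    # inside the container
```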
Alternatively, I found that the deepstream:5.1-21.02-triton image contained differing versions of libcuda.so, but the -devel image did not. Hence, I switched to deepstream:5.1-21.02-devel and it works fine for me.
The explanation for @Guitariout’s situation is that the image expects GPU access via nvidia-container-runtime, but docker build does not use it; hence the error. However, when the container is launched via docker run --runtime=nvidia ..., it has access to the GPU and make succeeds.
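You can see the difference directly; for example (the exact libcuda.so locations may vary by platform and image):

```shell
# Under the default runtime (same situation as `docker build`):
docker run --rm nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples \
    sh -c 'find /usr -name "libcuda.so*"'

# Under the nvidia runtime, the host driver's libcuda.so is mounted in:
docker run --rm --runtime=nvidia nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples \
    sh -c 'find /usr -name "libcuda.so*"'
```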
The culprit is the /usr/lib/x86_64-linux-gnu/libcuda.so present in the image while the default Docker runtime is set to nvidia-container-runtime. First of all, ensure that the machine’s /etc/docker/daemon.json does not define default-runtime; remove it if it is present. Then reload the Docker daemon and restart Docker. To be safe, you can reboot your machine (I had to).
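For reference, a daemon.json that sets nvidia as the default runtime typically looks something like this (paths may differ on your machine); the default-runtime entry is the one to remove:

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

After editing it:

```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
```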
Rebuild your image from the Dockerfile and you should be good to go.
Note:
You may compare the presence of /usr/local/cuda-11.1/lib64/libcuda.so in nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples and in your image. To avoid the error you are facing, /usr/local/cuda-11.1/lib64/libcuda.so should not be present.
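For example (my-ds-app here stands in for your own image tag):

```shell
# Check the base image:
docker run --rm nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples \
    ls -l /usr/local/cuda-11.1/lib64/libcuda.so

# Check your own image:
docker run --rm my-ds-app \
    ls -l /usr/local/cuda-11.1/lib64/libcuda.so
```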