PS: CUDA is installed in a different path, not under /usr/local, and I created soft links from that location to /usr/local/cuda-10.2.
I create a container with this command:
docker run -it --runtime nvidia -v $PWD:/home/env --name test nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples /bin/bash
Then I enter the container and test DeepStream:
deepstream-app -c conf/deepstream_app_config.txt
and I get this error:
error while loading shared libraries: libcudart.so.10.2: cannot open shared object file: No such file or directory
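To narrow down where the loader is failing, a small check like the one below can be run inside the container. The helper name `missing_libs` is my own, not anything from DeepStream; it just wraps `ldd`:

```shell
#!/bin/sh
# missing_libs: list the shared objects a binary cannot resolve.
# (Hypothetical helper name; it simply filters ldd output.)
missing_libs() {
  ldd "$1" 2>/dev/null | grep 'not found' || true
}

# Inside the DeepStream container you would run something like:
#   missing_libs "$(command -v deepstream-app)"
# and also check whether the loader cache knows about libcudart at all:
#   ldconfig -p | grep libcudart
```

If `libcudart.so.10.2` shows up as "not found" but exists nowhere under `/usr/local/cuda-10.2` in the container, the library was never mounted in, which points at the host-side setup rather than the image.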
I’m not clear on the following:
Could the CUDA soft link mentioned above cause this error?
Should CUDA be installed on the base system to run the samples revision of the DeepStream image, or is the driver alone enough?
I tested the Docker image on a Tesla T4 with no CUDA installed on the base system, only the driver, and it worked.
If the shared libraries have to be provided by the base system, does that mean CUDA must be installed there?
I hope someone can help. Thanks!
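One thing worth checking on the Jetson host: with the NVIDIA container runtime on L4T, CUDA libraries are mounted into the container at run time based on csv files under `/etc/nvidia-container-runtime/host-files-for-container.d/` (that path is what a typical JetPack install uses; adjust for yours). If CUDA actually lives elsewhere and only symlinks point to `/usr/local/cuda-10.2`, a broken link would make those entries unresolvable. A minimal sketch to flag such entries:

```shell
#!/bin/sh
# check_csv: print every path listed in an nvidia-container-runtime csv
# file that does not resolve on the host (missing file or broken symlink).
# The csv format is "type, /host/path", e.g. "lib, /usr/local/cuda-10.2/...".
check_csv() {
  awk -F', ' '{print $2}' "$1" | while read -r p; do
    [ -e "$p" ] || echo "MISSING: $p"
  done
}

# On a Jetson host you would run (JetPack default location, an assumption):
#   check_csv /etc/nvidia-container-runtime/host-files-for-container.d/cuda.csv
```

Any `MISSING:` line would explain why the library never appears inside the container.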
I pulled the samples-revision Docker images for both x86 and aarch64 and found differences between the two. Here is a snapshot:
You can see the difference between the dGPU and L4T samples images in my snapshot; I created both containers with the same command.
I’m not clear why this difference exists.
Does it mean that on an L4T system the container must get CUDA linked in from the base system? It seems x86 does not need the base system’s CUDA libraries, because the Docker image already contains them.
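To see the difference directly, the same directory can be listed inside each image. The commands below use the image tags from this thread and must be run on the matching architecture; the l4t one needs `--runtime nvidia` because its CUDA comes from host mounts, not from the image itself:

```shell
# x86/dGPU image: CUDA libraries are baked into the image.
docker run --rm nvcr.io/nvidia/deepstream:5.1-21.02-samples \
    ls /usr/local/cuda-10.2/lib64

# L4T image: the directory is populated only when the nvidia runtime
# mounts the host's CUDA files in at container start.
docker run --rm --runtime nvidia nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples \
    ls /usr/local/cuda-10.2/lib64
```

Running the second command without `--runtime nvidia` and seeing an empty or missing directory would confirm that the l4t image depends on the base system's CUDA install.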
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic.
If you need further support, please open a new one.
Thanks