I'm confused about these two parameters when running a Docker container.
On x86 servers, if I run the container with --gpus, I can run nvidia-smi inside it and see the driver version, but no additional CUDA files appear in the container. If I instead run it with --runtime=nvidia, nvidia-smi is not available, and I still don't see any CUDA files.
On Jetson devices, whether I run the container with --gpus or with --runtime=nvidia, I can't see any difference between the two containers. I'm not clear on how these parameters should be used on Jetson.
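For reference, the kind of test I'm running looks roughly like this (the image names and tags are only placeholders, not exactly what I used):

```
# x86 server: with --gpus the driver utilities show up, so nvidia-smi works
docker run --rm --gpus all ubuntu:22.04 nvidia-smi

# x86 server: with --runtime=nvidia alone, nvidia-smi is not found in my case
docker run --rm --runtime=nvidia ubuntu:22.04 nvidia-smi

# Jetson: these two look identical to me
docker run --rm --gpus all nvcr.io/nvidia/l4t-base:r35.1.0 ls /usr/local/cuda
docker run --rm --runtime=nvidia nvcr.io/nvidia/l4t-base:r35.1.0 ls /usr/local/cuda
```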
Hope someone can help clarify this, thanks in advance!
Whether I run the container with or without --runtime=nvidia, I don't find any difference in the CUDA files between the two containers, on either the x86 or the arm64 platform.
But as the earlier snapshot shows, it seems some files do get mapped into the container when --runtime=nvidia is used.
I also tried to compare --gpus with --runtime=nvidia directly, but I'm still not clear about the difference.
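Roughly, the kind of comparison I mean is something like this (plain Ubuntu image and library paths used only as an example):

```
# Collect CUDA-related files from a container started each way, then diff them
docker run --rm --gpus all ubuntu:22.04 \
    sh -c 'ls /usr/local; ls /usr/lib/x86_64-linux-gnu | grep -i -E "cuda|nvidia"' > with_gpus.txt
docker run --rm --runtime=nvidia ubuntu:22.04 \
    sh -c 'ls /usr/local; ls /usr/lib/x86_64-linux-gnu | grep -i -E "cuda|nvidia"' > with_runtime.txt
diff with_gpus.txt with_runtime.txt
```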
I also attach two test snapshots, one from the x86 platform and one from the arm64 platform, below:
To put it simply, use --gpus on x86 and --runtime=nvidia on Jetson / ARM SBSA systems.
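A minimal sketch, assuming Docker plus the NVIDIA Container Toolkit are already set up (the image tags below are only placeholders, match them to your driver and JetPack version):

```
# x86 server: --gpus asks Docker to expose the GPUs and inject the driver utilities
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Jetson: --runtime=nvidia applies the CSV mounts so CUDA and the Tegra libraries
# are shared in from the host
docker run --rm --runtime=nvidia nvcr.io/nvidia/l4t-base:r35.1.0 ls /usr/local/cuda
```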
On Jetson, CUDA and some device nodes are shared with the host. You can check this file for more information: /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv
For newer JetPack releases, please check /etc/nvidia-container-runtime/host-files-for-container.d/drivers.csv instead.
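If you want to see what actually gets mapped in, something along these lines should work (exact file names and library paths vary between JetPack versions):

```
# The NVIDIA runtime on Jetson reads these CSV files to decide what to mount
ls /etc/nvidia-container-runtime/host-files-for-container.d/

# Each entry is roughly "type, host-path" (dev, lib, sym, dir, ...)
head /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv

# Then check inside a container started with --runtime=nvidia that those host
# files really show up, e.g. the Tegra driver libraries
docker run --rm --runtime=nvidia nvcr.io/nvidia/l4t-base:r35.1.0 \
    ls /usr/lib/aarch64-linux-gnu/tegra
```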
In fact, for more details you would need to look into nvidia-container-toolkit; I don't know much about its internals myself.