What's the difference between --gpus and --runtime=nvidia for a docker container?

I am confused about these two parameters when running a docker container.

  1. For x86 servers, if I run the docker container with --gpus, I can use the nvidia-smi command to see the driver version, but I find no additional CUDA files in the container. If I run the docker container with --runtime=nvidia instead, I cannot use the nvidia-smi command, and I also see no CUDA files. (A rough sketch of the commands I mean follows this list.)
  2. For Jetson series products, whether I run the docker container with --gpus or --runtime=nvidia, I cannot see any difference between the two containers. I am not clear on how to use these parameters on Jetson products.
    I hope someone can clarify this, thanks in advance!
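For reference, here is a rough sketch of the kind of commands I am comparing (the image tag is only a placeholder, not the exact image from my tests):

    # Option A: the --gpus flag (Docker 19.03+ with the NVIDIA Container Toolkit installed)
    docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

    # Option B: the nvidia runtime registered by nvidia-container-runtime;
    # GPU visibility is controlled through NVIDIA_VISIBLE_DEVICES in this case
    docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all \
        nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi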

Here are some screenshots from running the commands on the Jetson platform:


What confuses me is:

  1. If I run the docker container with or without --runtime=nvidia, I do not find any difference in the CUDA files between the two containers, on either the x86 or the arm64 platform.
  2. But as the screenshot above shows, it seems some files are mapped into the container when using --runtime=nvidia.
  3. I also tried to compare --gpus with --runtime=nvidia, but I am still not clear about the difference. (A rough sketch of this comparison follows the screenshots below.)
    I also attach two test screenshots from the x86 platform and the arm64 platform below:

x86 platform result:

arm64 platform result:
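Roughly, the comparison above was done like this (the image name "my-image" is just a placeholder for the image used in my tests):

    # Compare the files each container ends up with under /usr/local
    docker run --rm --gpus all my-image ls /usr/local > with_gpus.txt
    docker run --rm --runtime=nvidia my-image ls /usr/local > with_runtime.txt
    diff with_gpus.txt with_runtime.txt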

To put it simply, use --gpus on x86 and --runtime=nvidia on Jetson/ARM SBSA systems.
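If it helps, here is a quick way to check which runtimes your Docker daemon has registered (standard docker commands; the exact output depends on your setup):

    # Show the registered runtimes and the default runtime
    docker info | grep -i runtime

    # On Jetson, JetPack usually pre-registers the nvidia runtime in /etc/docker/daemon.json;
    # if "default-runtime" is set to "nvidia" there, a plain `docker run` already behaves
    # like `docker run --runtime=nvidia`
    cat /etc/docker/daemon.json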

On Jetson, CUDA and some device nodes are shared with the host. You can view this file for more information.
/etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv

For newer JetPack releases, please check /etc/nvidia-container-runtime/host-files-for-container.d/drivers.csv
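For example, to see what gets mounted from the host (using the paths above; the exact entries depend on your JetPack version):

    # Entries such as dev/dir/lib/sym in the CSV are mapped from the host into the container
    head -n 20 /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv

    # On newer JetPack releases:
    head -n 20 /etc/nvidia-container-runtime/host-files-for-container.d/drivers.csv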

In fact, for more details you need to refer to nvidia-container-toolkit; I don't know much about it.


Thanks for your reply. I will check the details first.
