Does nvidia-docker install cuda and nvidia-driver in docker?

Hi everyone,

I have installed Docker and nvidia-docker v1 following the nvidia-docker guide on GitHub. The guide says that, after installation, usage should look like this:

nvidia-docker run --rm nvidia/cuda nvidia-smi

What is that command doing? Is it installing the latest version of CUDA and the NVIDIA driver? My OS already has the NVIDIA driver installed. Also, because I will use DIGITS for the jetson/redtail project, I need CUDA 8.0, but my system has CUDA 9.0. Could you please explain?

Thanks in advance, Ender.

nvidia-docker by itself installs the driver and some related libraries into whatever container you launch; it does not install the full CUDA toolkit.

However, your command is pulling a container image from Docker Hub that has the CUDA toolkit installed:

https://hub.docker.com/r/nvidia/cuda/

and running the nvidia-smi command in that container.
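You can also pick a specific toolkit version by tag instead of the default "latest". A sketch (the exact tag is an assumption; check the Docker Hub page above for the tags that are actually published):

```
# Run a CUDA 8.0 container instead of the default tag.
# The host driver is mounted in by nvidia-docker; the toolkit comes from the image.
nvidia-docker run --rm nvidia/cuda:8.0-devel-ubuntu16.04 nvcc --version
```

This is how you would get a CUDA 8.0 environment for DIGITS without touching the CUDA 9.0 installation on your host.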

If you are using containers, you might be interested in NGC:

https://docs.nvidia.com/ngc/ngc-user-guide/index.html

Thanks for the explanation, I will dig into these topics. But I have another question before going further. If I uninstall the CUDA 9.0 that my system has and install an older version, e.g. CUDA 8.0, will nvidia-docker install all the related stuff, such as CUDA 8.0, into the container? In other words, does nvidia-docker pull what I have installed on my host system? In my case, the NVIDIA driver and cuda-x.y.

nvidia-driver: yes
cuda-x.y: no
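That is, the CUDA toolkit version inside the container is determined entirely by the image you choose, not by what is installed on the host. A minimal Dockerfile sketch (the tag is an assumption; check Docker Hub for the ones that exist):

```
# The toolkit version comes from this base image, regardless of
# which CUDA version (if any) the host has installed.
FROM nvidia/cuda:8.0-devel-ubuntu16.04

# nvcc 8.0 is available here even if the host has CUDA 9.0.
RUN nvcc --version
```

Only the driver (and driver libraries such as libcuda.so) is taken from the host at run time.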

If it pulls the driver from the host system, how can you upgrade the driver within Docker?

I am new to Docker. For a start, which base image should I use for deploying a precompiled CUDA binary (CUDA 8/9 with cudart statically linked)? nvidia/driver, nvidia/cuda-x.x, or just ubuntu?

I suppose that if one wants to build the binary from source, one needs nvidia/cuda because of the need for nvcc. However, for deploying a precompiled executable with static cudart, I am wondering whether I just need the driver, or nothing at all, relying on the driver installed on the host?

I found that the nvidia/cuda-9.0 base image is 170 MB, while nvidia/driver:396.37-ubuntu16.04 is 750 MB. Is it true that if I derive my image from nvidia/driver, a user may run my program without needing a driver installed locally? What is the recommended container setup for deploying a CUDA application?
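One common pattern (not from this thread; the tags, file names, and flags below are illustrative assumptions, and multi-stage builds need Docker 17.05+): compile in a -devel image, then copy the binary into a slim CUDA base image so that the small image still carries the labels nvidia-docker uses to mount the host driver in at run time.

```
# Build stage: full toolkit, provides nvcc.
FROM nvidia/cuda:9.0-devel-ubuntu16.04 AS build
COPY mykernel.cu /src/
# Link cudart statically so the runtime image needs no toolkit.
RUN nvcc -O2 -cudart static -o /src/myapp /src/mykernel.cu

# Deploy stage: slim base image; the driver comes from the host at run time.
FROM nvidia/cuda:9.0-base-ubuntu16.04
COPY --from=build /src/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

With static cudart the deployed image does not need the CUDA toolkit at all, but the end user still needs a sufficiently new NVIDIA driver on the host; no image can substitute for that.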