NVIDIA CUDA in a custom Docker image

I have a Docker host with nvidia-docker2 installed (https://github.com/NVIDIA/nvidia-docker).

I have successfully managed to develop, compile, and run a CUDA application inside a Docker container using NVIDIA's devel image (https://hub.docker.com/r/nvidia/cuda/). With nvidia/cuda-devel as the base image, I was able to exec into a running container and perform all of these tasks.

I then tried to do the same thing from my own base image (not derived from nvidia/cuda) by installing CUDA as part of my Dockerfile, logging into the container, and compiling my CUDA application there. In that container, however, I cannot even run nvidia-smi; it fails with "Failed to initialize NVML: Unknown Error".
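For reference, this is roughly how I start the container (the image name my-cuda-image is a placeholder). As I understand it, with nvidia-docker2 the container must be started with the nvidia runtime, and the runtime hook only injects the host driver libraries when the NVIDIA_* environment variables are present, so passing them explicitly at run time should rule out that side of things:

```shell
# Placeholder image name; adjust to your own build.
# --runtime=nvidia selects the nvidia-container-runtime;
# the NVIDIA_* variables tell its hook what to expose.
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
  my-cuda-image nvidia-smi
```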

Is the nvidia/cuda-devel base image doing something fancy to interact with the host, or is there some other critical step I am missing beyond simply installing CUDA in the container? In their actual Dockerfile, I don't see anything fancy going on: https://gitlab.com/nvidia/cuda/blob/ubuntu18.04/10.0/devel/Dockerfile
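The only thing I can spot that might matter is that the base layer the devel image builds on sets a couple of ENV variables that the nvidia runtime hook apparently reads. If that is the mechanism, replicating them in my own Dockerfile would look roughly like this (a sketch, not verified on my setup):

```dockerfile
# Sketch: environment variables the nvidia-docker2 runtime hook reads.
# Without them, the hook does not mount the host driver libraries into
# the container, and nvidia-smi fails with "Failed to initialize NVML".
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
# Optionally constrain which host driver/CUDA versions the image accepts:
# ENV NVIDIA_REQUIRE_CUDA="cuda>=10.0"
```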

Hi @tommyjcarpenter, did you make any progress? We're trying to do the same thing.
Thanks,
Yardena