Official Docker image for CUDA?

Hello CUDA devs,

I wanted to ask if the community would be interested in creating an official Docker image [1] for CUDA on Docker Hub? I’ve been using CUDA with Docker for a while now for computer vision work with frameworks such as Caffe, and the fragmentation of projects combining CUDA and Docker has become difficult to work with [2]. I’d like to propose we create a pull request to the list of official images [3] and create an official repository Docker users can share and rally around.

I’ve submitted PRs for official Docker images before, but I wouldn’t consider myself a CUDA pro just yet. So I’m hoping some of you seasoned vets could help contribute and have a say in how it should be structured.

Thanks,
ruffsl

[1] https://registry.hub.docker.com/search?q=library
[2] https://registry.hub.docker.com/search?q=cuda
[3] https://github.com/docker-library/official-images

I’ve sent out an invite for discussion just after posting this, and I hope those parties will join in soon, but in the meantime I thought I’d outline some important topics to bump up the thread:

  • Licence: What exact licence does this fall under? It looks like there are separate licence agreements for the toolkits, samples, and drivers. We need to make sure the licensing is clearly specified and cited, and also compatible with this endeavor.
  • Base Image: What official base image should be used for building the image? My bias would lean towards an Ubuntu LTS release for its popularity, familiarity, and support. But I know it's not the leanest of base images, and I have seen users build from alternatives such as Debian or CoreOS releases.
  • Tags: What tagging convention should be adopted? My first thought would be to tag by CUDA version, e.g. `cuda:6.5`, `cuda:7.0`, etc. Should the sample sources also be included for quick setup verification?
  • Drivers: Should the distribution of each CUDA version include the drivers, and if so, which versions? A recurring issue when using CUDA with Docker is that the driver version on the host must match the one inside the container. Is there a way around this? Should this be left to the user to configure?
  • Maintenance: Submitting an official repo should not be taken lightly, as some commitment is necessary to make sure releases are updated and maintained. My hope would be that the CUDA community would adopt this and be active enough to use and support the official image, and if NVIDIA themselves would like to contribute, even better.
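To make the base-image and tagging questions above concrete, here is a minimal sketch of what a `cuda:7.0` image on an Ubuntu LTS base might look like. The repository URL, repo-package filename, and install paths below are illustrative assumptions based on NVIDIA's usual apt packaging, not a tested recipe:

```dockerfile
# Hypothetical Dockerfile for a cuda:7.0 tag, built on Ubuntu 14.04 LTS.
FROM ubuntu:14.04

# Add NVIDIA's apt repository and install the toolkit (no driver here;
# exact URLs and package names would need to be verified per release).
RUN apt-get update && \
    apt-get install -y --no-install-recommends wget ca-certificates && \
    wget -qO /tmp/cuda-repo.deb \
      http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.0-28_amd64.deb && \
    dpkg -i /tmp/cuda-repo.deb && \
    apt-get update && \
    apt-get install -y --no-install-recommends cuda-toolkit-7-0 && \
    rm -rf /var/lib/apt/lists/* /tmp/cuda-repo.deb

# Make nvcc and the CUDA libraries visible to child images.
ENV PATH=/usr/local/cuda-7.0/bin:$PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64
```

Tagging could then just follow the toolkit version, with each tag's Dockerfile differing only in the repo package and install prefix.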

ruffsl

This sounds like a great idea ruffsl!!

Have you managed to make any progress so far? I would certainly be interested in making this a thing, it would be super useful.

Happy to help out any way I can :-)

Hey inJeans,

I’m back in school, so I haven’t had a chance to put more into this. I was hoping to get some others I know who use Docker and CUDA involved here, but they’ve each only emailed me back directly:

Sweet. Great to hear you are making progress, albeit not with the official channels.

I certainly agree that the driver problem will be the biggest issue. I wonder if the best solution would be to have the toolkit and drivers in separate containers and have users link the relevant containers together as needed?

Anyway I am happy to provide any help if you eventually make any progress.

Does anyone know who I should be emailing to get NVIDIA to consider the issue?

We’ve been busy! We just published a new GitHub repo providing Docker images for CUDA:


We would love to get some feedback from you guys. Apologies that there was no response on this thread earlier; I was not aware of it. We don’t have an official image on the Docker Hub yet, as we would like to get more testing done first :)

We solved the driver mismatch problem by providing a wrapper script for Docker that finds the driver files on the host and mounts them as volumes inside the container when it is started.
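For illustration, the general idea behind such a wrapper can be sketched with plain `docker run` flags: mount the host's driver libraries as read-only volumes and expose the NVIDIA device nodes. The library paths and device names below are assumptions that vary by distro and driver version; a real wrapper discovers them automatically:

```shell
#!/bin/sh
# Sketch: assemble the volume and device flags a wrapper might pass to
# `docker run` so the container uses the host's driver. Paths are examples.
DRIVER_LIBS="/usr/lib/x86_64-linux-gnu/libcuda.so.1 \
             /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1"
DEVICES="/dev/nvidiactl /dev/nvidia-uvm /dev/nvidia0"

ARGS=""
for lib in $DRIVER_LIBS; do
  ARGS="$ARGS -v $lib:$lib:ro"      # mount each driver library read-only
done
for dev in $DEVICES; do
  ARGS="$ARGS --device=$dev"        # expose each NVIDIA device node
done

# The wrapper would ultimately exec something like:
echo "docker run$ARGS cuda:7.0 nvidia-smi"
```

Since the libraries come from the host at run time, the image itself stays driver-agnostic.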

Sweet. So I just created an automated build in a (private) Docker Hub repo and integrated it into an existing Docker image I have for a current project.

I am not actually making use of any GPU hardware, but I use the Docker image to run tests on Travis CI.

So far it seems to be working well. I don’t do anything too advanced, but it works nonetheless.

Thanks

Wow @flx42, this is awesome, it’s great to see some official work on this! I’ll cite this GitHub repo in some other discussions I want to start with the TensorFlow project. Also, that nvidia-docker script is just like the Docker script I’ve been hacking together to bootstrap CUDA containers for a while; it’s neat to see a more polished approach.

Cross-link to the relevant TensorFlow discussion: