NVIDIA Docker: GPU Server Application Deployment Made Easy

Originally published at: https://developer.nvidia.com/blog/nvidia-docker-gpu-server-application-deployment-made-easy/

Over the last few years there has been a dramatic rise in the use of containers for deploying data center applications at scale. The reason for this is simple: containers encapsulate an application’s dependencies to provide reproducible and reliable execution of applications and services without the overhead of a full virtual machine. If you have…

Can / will I be able to run this using docker for Windows?



Thanks for the blog post!

Is it also possible to package Docker applications that use (1) the NVENC/NVDEC drivers and/or (2) the OpenGL/EGL parts of the NVIDIA driver? Or does this only work with CUDA applications?



It should work for both cases: we export all the driver libraries you need for those use cases too.
We have an experimental OpenGL branch on GitHub. Please open an issue if it doesn't work for you.

It's not possible right now; we would need GPU passthrough.

is this available for L4T (TK1 and TX1) ?

Currently: no. Docker doesn't officially support ARM right now.
You can find multiple tutorials about running Docker on ARM, but you will probably also need to modify nvidia-docker to make it work.

Hey, I'm one of the authors of this blog post. I just wanted to point out this great tutorial contributed by a Kaggle user on how to use nvidia-docker to run deep learning apps on Amazon AWS GPU instances:

Guys, thanks so much for putting this together. Using this and the tutorial you posted in the comment below I was finally able to get a working setup on EC2 with Nvidia gpus. You rock!

Thanks Itai. Glad you are up and running!

Would linux docker containers running on windows support GPU as well?
Can you direct me to any tutorials if so?

It's not supported right now; you would need to do GPU passthrough with Hyper-V.

How can I set up GPU passthrough? Is it straightforward?

Hello, when I run this command: sudo nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflo...
and then open localhost:8888 in my browser, I am asked to enter a login password. Could you please help me fix this issue? I am a newbie in this field and would be grateful for a detailed solution. https://uploads.disquscdn.c...

See my answer here: https://github.com/NVIDIA/n...
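For anyone hitting the same prompt: the Jupyter server inside the TensorFlow image prints a one-time login token on startup, and that token is what the browser prompt expects. A hedged sketch below; the container name "tf" and the sample log line are made up for illustration:

```shell
# The Jupyter server prints a URL containing a login token when it starts.
# For a detached container, read it from the container logs
# (the container name "tf" is an assumption; use yours):
#
#   nvidia-docker logs tf 2>&1 | grep 'token='
#
# The printed line looks roughly like the sample below; the part after
# "token=" is what the browser prompt expects.
sample_log='http://localhost:8888/?token=abc123def456'   # illustrative only
token="${sample_log#*token=}"
echo "$token"
```

Paste that token into the browser prompt, or just open the full URL from the logs directly.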

VMware VMs have been able to use NVIDIA GPUs (well) on a Windows host for years. What's the roadblock to making it possible in Docker? I know that's an oversimplified question, but having used NVIDIA in Docker ( https://hub.docker.com/r/ra... ) on a Linux host, and having used NVIDIA GPUs in VMs for a couple of years, it seems odd this isn't in place. Doubly so when Windows 10 is getting a fair bit of love from Docker in every other way.

See this issue: https://github.com/NVIDIA/n...
If you spawn a VM and set up GPU passthrough, you can then use Docker and nvidia-docker inside this VM without any issue. I've tested this successfully with KVM.

But if you want to use Docker for Windows (Hyper-V), you would need Discrete Device Assignment support from Docker and Hyper-V (which is only available on Windows Server 2016). That's why we don't support it.
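For reference, a rough sketch of the KVM passthrough prerequisites mentioned above. This is hedged: PCI addresses vary per machine, and the virt-install invocation is only indicative, not a complete command:

```shell
# Hedged sketch: find the GPU's PCI address, then pass the device through
# to the guest, e.g.:
#
#   lspci -nn | grep -i nvidia              # e.g. 01:00.0 ...
#   virt-install ... --hostdev 01:00.0 ...  # indicative only
#
# Passthrough only works if the host IOMMU is enabled; a quick check:
if [ -d /sys/kernel/iommu_groups ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
  msg="IOMMU appears enabled"
else
  msg="IOMMU not enabled (try intel_iommu=on or amd_iommu=on on the kernel command line)"
fi
echo "$msg"
```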

Very good. Thank you. Might be worth mentioning that the default being promoted by Docker (Hyper-V) is where things break down, but it is possible with other hypervisors.

I'm a bit fuzzy about how I would use KVM (or another hypervisor) to get around this, since I didn't set up Hyper-V explicitly. Are you suggesting I use any other hypervisor that can see the GPU and then set up from there as if that VM were my host, e.g. installing CentOS in a VMware (or KVM) VM, then installing Docker and doing my development inside that VM (losing my Windows environment)? Or is there a way to get Docker for Windows to use that VM, so I can continue to work in Windows, use PowerShell, etc.?

Is there an article that explains this you can link me to?

If you want to use nvidia-docker inside a VM, you do indeed need to treat this VM as a regular machine: install the distro, the NVIDIA drivers, Docker, and nvidia-docker.
There is currently no way (as far as I know) to use your Docker client from the host and spawn a GPU VM with Docker. But note that I don't have much experience with Docker on Windows.
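The setup inside the guest could look roughly like this. A hedged sketch assuming an Ubuntu guest; the driver package version and the nvidia-docker release URL are assumptions, so check the nvidia-docker GitHub releases page for current ones:

```shell
# Inside the guest VM, treated as a regular machine (Ubuntu assumed).
sudo apt-get update
sudo apt-get install -y nvidia-367            # NVIDIA driver (version assumed)
curl -fsSL https://get.docker.com/ | sh       # Docker engine
# nvidia-docker 1.x Debian package (release URL and version assumed):
wget https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i nvidia-docker_1.0.1-1_amd64.deb
# Verify the GPU is visible from inside a container:
nvidia-docker run --rm nvidia/cuda nvidia-smi
```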

I was able to get things working using the latest-gpu tag. However, when I run TensorFlow I get the CPU warnings saying that execution could be faster if it were built with SSE instructions. How do I get this speedup? Evidently the latest-gpu image was not built with these instructions.
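For reference, the usual way to get those CPU instructions is to build TensorFlow from source with the relevant compiler flags enabled. A hedged sketch; the bazel target and flags follow TensorFlow's build documentation, and this assumes a TensorFlow source checkout with ./configure already run with CUDA enabled:

```shell
# Hedged sketch: build TensorFlow from source targeting the host CPU,
# so SSE/AVX instructions are compiled in (-march=native is the assumption).
bazel build -c opt --copt=-march=native --config=cuda \
    //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```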