NVIDIA Docker: GPU Server Application Deployment Made Easy

Hi Zak, I think that's a question for the TensorFlow team. Thanks!

Hi guys,

We have an Ubuntu server set up; our goal is to run a 3D simulation on the server (with an NVIDIA GPU) and have nvidia-docker stream the 3D content to a client (without a GPU). Please note that the 3D simulation is an Unreal executable that runs on Linux. Can you please confirm whether this is possible using an NVIDIA GPU?

Tried one of the above examples with the following command:

```
nvidia-docker run --name digits --rm -ti -p 8000:34448 nvidia/digits
```

but got the following error when connecting:

```
curl localhost:8000 -vv
* Rebuilt URL to: localhost:8000/
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.47.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
```

Any help will be greatly appreciated. Thank you!

`nvidia/digits` has changed the port that it listens on.

For `nvidia/digits:4.0` and earlier, your command is correct. For `nvidia/digits:5.0` and later, you'll need to map to port 5000 inside the container.

Change your port mapping to `-p 8000:5000`.
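For example, the earlier command with the corrected mapping would look something like this (a sketch; the host port 8000 is just what the earlier command used):

```shell
# Map host port 8000 to port 5000 inside the container (DIGITS 5.0+).
nvidia-docker run --name digits --rm -ti -p 8000:5000 nvidia/digits
```

Once the container is up, `curl localhost:8000` on the host should reach the DIGITS server listening on port 5000 inside the container.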

Thank you very much for clarification!

Hi guys,

When trying to run the nvidia-docker nbody sample from an Ubuntu terminal on an Azure VM, it gives the following result rather than graphical output. Is it possible to see the nbody graphical simulation? Any help would be appreciated! Thank you.

```
nvidia-docker run --rm sample:nbody
Run "nbody -benchmark [-numbodies=<numbodies>]" to measure performance.
	-fullscreen       (run n-body simulation in fullscreen mode)
	-fp64             (use double precision floating point values for simulation)
	-hostmem          (stores simulation data in host memory)
	-benchmark        (run benchmark to measure performance)
	-numbodies=<n>    (number of bodies (>= 1) to run in simulation)
	-device=<d>       (where d=0,1,2.... for the CUDA device to use)
	-numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
	-compare          (compares simulation results running once on the default GPU and once on the CPU)
	-cpu              (run n-body simulation on the CPU)
	-tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Tesla K80" with compute capability 3.7

> Compute 3.7 CUDA device: [Tesla K80]
13312 bodies, total time for 10 iterations: 32.774 ms
= 54.071 billion interactions per second
= 1081.413 single-precision GFLOP/s at 20 flops per interaction
```

I can't seem to connect on the right port. I am new to most of this, but here is what I do:

I ssh into the AWS server with:

```
ssh -i "l***.pem" -L 8000:***:8000 ubuntu@***
```

and then

```
nvidia-docker run --rm -ti -p 8000:5000 nvidia/digits
```

The application seems to run, since DIGITS appears, but I get:

```
channel 4: open failed: connect failed: Connection timed out
```

Where do you think things are going wrong? Any ideas?

Thanks in advance
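In case it helps others debugging a similar setup, here is a sketch of how SSH local port forwarding composes with Docker's port publishing (hostnames and the key file are placeholders; the middle field of `-L` must be a host reachable *from the server*, e.g. `localhost` on the server itself):

```shell
# On your machine: forward local port 8000 to port 8000 on the remote
# host itself (i.e. localhost as seen from the server).
ssh -i key.pem -L 8000:localhost:8000 ubuntu@server.example.com

# On the server: publish the container's internal port 5000 on host port 8000.
nvidia-docker run --rm -ti -p 8000:5000 nvidia/digits

# Back on your machine: http://localhost:8000 now reaches DIGITS.
```

A "Connection timed out" on the forwarded channel usually means the target host in `-L local_port:target_host:target_port` isn't reachable from the server.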

I am using an NVIDIA Tesla V100 machine running Linux but want to change to Windows 10.
Is nvidia-docker available for Windows 10?

Hi. It would be helpful if you included an example on how to use docker-compose as well.
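For what it's worth, here is a minimal sketch of how GPU access can be declared for Compose (this assumes a recent Docker Compose that supports the `deploy.resources.reservations.devices` syntax; the service name and image are placeholders):

```shell
# Write a minimal compose file that reserves one NVIDIA GPU for the service.
cat > docker-compose.yml <<'EOF'
services:
  digits:
    image: nvidia/digits
    ports:
      - "8000:5000"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
EOF
# Then start it with: docker compose up -d
```

The `devices` reservation replaces the old `nvidia-docker` wrapper on setups where Docker's native `--gpus` support is available.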

Seems these threads are old and didn't get an answer. What's the point? Expected much more from a company like Nvidia.
