Linux server setup suggestions for remote CUDA and TensorFlow

Hi all,

I was using AWS for my ML work, but it was too expensive. My friend recently built a machine with a GeForce GTX 1070, which he keeps at his workplace.

What we’re hoping to achieve is for each of us to have our own sandboxed instance/VM that we can SSH into, with both of us able to access the GPU for TensorFlow, etc.


Doing some research, I looked at OpenStack and Eucalyptus, but they seemed overly broad and complex. My friend told me to check out OpenVZ, but I couldn’t find a straightforward answer on GPU passthrough.

Can someone suggest the easiest way to accomplish what we’re trying to do? Or perhaps link to a simple guide?

It should be possible to use Docker containers to do what you want.

Here is a useful writeup, with more or less step-by-step instructions:

[url]https://devblogs.nvidia.com/parallelforall/nvidia-docker-gpu-server-application-deployment-made-easy/[/url]

Due to the containerization, whatever you do should be “sandboxed” from whatever anybody else does on that machine.
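
For example, after installing nvidia-docker on the host, each of you could launch your own container along these lines. This is a rough sketch based on the nvidia-docker 1.x tooling described in that post and the stock tensorflow/tensorflow:latest-gpu image; the release version and the container name “alice-tf” are just examples, so adjust to whatever is current:

[code]
# Install nvidia-docker on the host (assumes Docker and the NVIDIA driver
# are already set up; check the GitHub releases page for the current version)
wget https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i nvidia-docker_1.0.1-1_amd64.deb

# Each user launches their own isolated TensorFlow container
nvidia-docker run -it --name alice-tf tensorflow/tensorflow:latest-gpu bash

# Quick check that the GPU is visible from inside a container
nvidia-docker run --rm tensorflow/tensorflow:latest-gpu \
    python -c "import tensorflow as tf; print(tf.test.gpu_device_name())"
[/code]

Since containers share the host kernel, both of you can use the GPU this way without any passthrough configuration, though note that GPU memory itself is not partitioned between containers.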

I have used the Docker image and nvidia-docker successfully; however, the TensorFlow binary in the Docker image is not compiled with support for SSE4, AVX, FMA, etc. Does anyone know how to make this happen with Docker? I think it might be complicated by how bazel works. I have found nothing on this. I posted to Stack Overflow here:
[url]Compile Tensorflow from source with Docker to get CPU speed up - Stack Overflow[/url]
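
For reference, what I’m trying to do inside the container is roughly the following. This is an untested sketch: the devel image tag and the exact flags are my assumptions based on the TensorFlow build docs, and the CPU flags assume a chip with AVX2/FMA support:

[code]
# Start from the devel image, which ships the TensorFlow sources and bazel
nvidia-docker run -it tensorflow/tensorflow:latest-devel-gpu bash

# Inside the container: configure and rebuild with the CPU optimizations
cd /tensorflow
./configure
bazel build -c opt --copt=-msse4.2 --copt=-mavx --copt=-mavx2 --copt=-mfma \
    --config=cuda //tensorflow/tools/pip_package:build_pip_package

# Package the result as a pip wheel and install it over the stock binary
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install --upgrade /tmp/tensorflow_pkg/tensorflow-*.whl
[/code]

If that worked, a docker commit of the resulting container would let both of us reuse the optimized build, but so far I haven’t gotten the build itself to go through cleanly.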