CUDA cloud service?

Hi everyone. I have some CUDA code whose performance I urgently need to improve. Does anyone know of a service I can use to run my CUDA program (emulation isn’t sufficient), or is anyone willing to set one up for me briefly (e.g. for two weeks)? Preferably, I want to use a compute capability 1.2 device. I see there’s the Hoopoe cloud service, but it’s not ready yet.

I have access to a Tesla 1060 at work, but they don’t have any laptops with NVIDIA GPUs to loan. I’ve ordered parts to build a desktop, but I ordered the just-released Gigabyte H57M-USB3 (I definitely want USB 3), which won’t arrive until after 2/5.

Thanks for any help.

Never used them, but Penguin Computing launched such a service.

It sounds like Penguin Computing is not quite as low commitment as the other cloud computing services.

There is also ZetaExpress, which was mentioned in another thread on this:

Prices seem kind of steep, but lower than the $5k setup mentioned for Penguin in this thread:

$25/month - that’s exactly what I need. Great, I’ll try them.

You could also use: might be cheaper…

they are the ones that put out JCUDA as well :)


Update: for $25 (basic service), you only get 900 s of GPU time and 7200 s of CPU time. They use a crude method of measuring GPU time: you have to call ze_lock and ze_unlock from the shell around launching your program, so the metered interval includes CPU time as well. I’ll have to reduce my run times to < 1 s so I can get about 900 executions. Still, a pretty good deal.
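A minimal sketch of one metered run as described, assuming ze_lock/ze_unlock behave as named above; the `command -v` guards and the stand-in job line are mine, so the sketch also runs on a machine without the service’s commands:

```shell
#!/bin/sh
# One metered run: open the billing window, run the program, close the
# window. Everything in between (CPU time included) counts against the
# 900 s quota.
command -v ze_lock >/dev/null 2>&1 && ze_lock      # start metering (service command)
echo "running GPU job"                             # stand-in for the real CUDA binary
job_status=$?
command -v ze_unlock >/dev/null 2>&1 && ze_unlock  # stop metering (service command)
```

In practice you would replace the `echo` line with your actual executable and check `job_status` afterwards.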

Could you clarify a bit? 900 seconds of GPU time for $25? That’s a 15-minute run for $25? Or is it 900 seconds per day for a whole month for $25?



Please be sure to report back with a review of the service! I am curious to know how it works in practice.

900 s/month for $25, plus $0.02/s for overtime. At first, I thought this was expensive, but I only need it for testing purposes, and GPU time is only counted while your program is running (wall-clock time for the entire program). Look here to see how they
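To make the numbers concrete, here is a back-of-the-envelope monthly bill under the plan described above (the usage figure is a made-up example; integer cents keep the arithmetic exact in sh):

```shell
#!/bin/sh
# Estimate a monthly bill: $25 base covers 900 s of GPU time,
# overtime billed at $0.02 per second.
used=1200        # example: seconds of GPU time used this month
included=900     # seconds covered by the base fee
base_cents=2500  # $25 base fee
rate_cents=2     # $0.02 per overtime second
over=$(( used - included ))
[ "$over" -lt 0 ] && over=0
total_cents=$(( base_cents + over * rate_cents ))
printf 'total: $%d.%02d\n' $(( total_cents / 100 )) $(( total_cents % 100 ))
```

With 1200 s used, that prints `total: $31.00`: 300 overtime seconds at 2 cents each on top of the $25 base.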


I’d better make sure I have no infinite loops, and not press Ctrl-C unless I really do have an infinite loop and need to break out and release the GPU manually.

This billing scheme strikes me as kind of ridiculous. Are you competing with multiple users on the same box for GPU time? In that case, these API calls might be a sort of GPU mutual-exclusion hack combined with a logging mechanism.

I don’t understand it… you get 15 minutes of GPU time over a period of a month for $25???

Send me $25 and I’ll run your code for an hour every day… I think this is ridiculous… :">


I think there is a need for a worldwide GPU grid using developer machines around the globe… Hmm… I think the technology has yet to catch up with that idea…

OK, after using the service for slightly more than a week, here’s my evaluation:

I got my account created about a day after I subscribed. It seems the machines and support staff are located in Singapore. I was assigned to a machine and told it’s only lightly loaded, so they will count neither my GPU nor my CPU usage. I still needed to use ze_lock and ze_release to get a GPU, so I wrapped my executable in a shell script that catches Ctrl-C so it always calls ze_release.
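A sketch of that kind of wrapper, assuming ze_lock/ze_release work as named in the thread; the trap logic and stand-in job line are mine, and the `command -v` guards let the sketch run where the service commands don’t exist:

```shell
#!/bin/sh
# Wrapper that always releases the GPU, even if the job is interrupted.
release_gpu() {
    # ze_release is the service's command; guard so the sketch still
    # runs on machines that don't have it.
    command -v ze_release >/dev/null 2>&1 && ze_release
    return 0
}
# On Ctrl-C (SIGINT), release the GPU before exiting.
trap 'release_gpu; exit 130' INT
command -v ze_lock >/dev/null 2>&1 && ze_lock   # acquire the GPU (service command)
echo "running GPU job"                          # stand-in for the real CUDA binary
release_gpu                                     # normal-path release
```

The point of the trap is that the release call runs whether the job finishes normally or is killed with Ctrl-C, so the GPU is never left locked against your quota.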

I believe the OS is 64-bit Red Hat Enterprise. The libraries are a bit old: it’s still using libc5 and barely has any development libraries/headers installed. I spent about an hour copying over all the image libraries I need and recompiling some due to missing functions in libc5. Also, scp and rsync were broken (rsync relies on the same ssh transport), which I notified them about; rather than waiting for a fix, I came up with a workaround:

ssh "cat > a.7z" < a.7z
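The trick works because ssh forwards local stdin to the remote command, and `cat > a.7z` on the far side just writes those bytes to a file. The same pipe pattern can be demonstrated locally, with `sh -c` standing in for `ssh <host>` and illustrative file names:

```shell
#!/bin/sh
# The scp workaround relies on ssh forwarding local stdin to the remote
# command. Here `sh -c` stands in for `ssh <host>` so the pattern can be
# demonstrated without a remote machine.
printf 'archive-bytes' > a.7z        # a sample "archive" to transfer
sh -c 'cat > copy.7z' < a.7z         # same shape as: ssh host "cat > a.7z" < a.7z
if cmp -s a.7z copy.7z; then
    transfer_ok=1
    echo "transfer ok"
fi
rm -f a.7z copy.7z                   # clean up the demonstration files
```

Because the bytes travel over the ssh channel untouched, the copy compares identical to the original, which is all scp would have guaranteed anyway.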

After I got everything to compile, my program ran exactly as expected. The wall-clock time matched what I got on Windows Server to 2 figures. Infinite GPU loops can be aborted with Ctrl-C, which I can’t figure out how to do on Vista. Overall, the Teslas worked solidly for me, and I don’t know what else to say. I didn’t run anything lengthy, so I can’t speak to long-term stability, but Tesla is designed for high reliability.

If there’s anything else you want to know, feel free to ask.

Hey! It doesn’t seem to be alive anymore. Do you know what happened to them?

This is quite an old thread - in case you missed it, Amazon now has Tesla-based clusters in their EC2 cloud:

I don’t, but Amazon now provides nodes with 2 x Tesla “Fermi” M2050 GPUs in their cloud.