CUDA Online Shell

Hello everyone. Sorry if this is a duplicate; I didn’t find anything about it.

I would like to know if NVIDIA provides any CUDA Online Shell, where I can code myself and run the codes.

Something like this existing C++ Shell: http://cpp.sh/

My main goal is to teach CUDA programming to some students, and I’m pretty sure they couldn’t buy a GPU so easily.

Thanks,

My Best.

This may be a stupid response, but couldn’t you just buy a GPU, install it in your desktop, download CUDA 8 and an IDE, and do this on your side?

I mean you might even have a GPU in your computer now, why not just use that?

I get it that these Online Shells exist to make your life easier when you don’t want to download a compiler, but GPUs are more expensive and power hungry, making it less likely for people to have them installed on a server for you to access.

You don’t even need an expensive GPU to run basic CUDA applications.

Hello, thanks for the response.

My main goal is to teach CUDA programming to some students, and I’m pretty sure they couldn’t buy a GPU so easily.

My Best.

I wish there was something like this for CUDA. Let me know if you find it lol.

Edit:

My dream is to someday see Compiler Explorer support. That’d be nice. I’ve found that more often than not, with online shells I just wanna see if they’ll compile.


Yeah, I don’t think there exists anything online that can do this.

Maybe talk to one of the server farm companies and see if they can provide a service like this?
NIMBIX/JARVICE provides a service similar to what you are looking for. They have servers with CPUs/FPGAs/GPUs installed that you can access and work on (for a price, obviously). You might be able to convince them that there are enough people looking for a CUDA platform on their cloud service that they would create one. That would solve your issue, but it would cost money upfront, and it might end up being cheaper to just buy GPUs; ease of access seems to be what you are looking for, though.

While this forum is NVIDIA/CUDA based, a wise option might be to teach the class OpenCL instead. The two languages are relatively similar, with the advantage that OpenCL can run on CPUs as well as GPUs. This makes it more accessible for your class, as I’m assuming everyone has a laptop or something similar to program on. This way you can still teach the basics, similar to CUDA, but have something to actually program in/with.

Pricing for AWS GPU instances is really not bad: about $0.25 an hour for a machine with effectively two GTX 760-level GPUs. You can indeed log into them, send over your files, compile, and run.

Amazon EC2 Spot Instances Pricing (Look for US west coast, maybe others have GPUs too)

This is not as cost effective for long running compute as simply buying your own (say) GTX 1060 GPU, but it could work well for a class with dozens (or hundreds) of students who might put in say 100 hours of time during a class to learn CUDA, costing only about $25.

Most online CUDA classes use AWS GPU instances, which are not hard to set up.

I may be very biased, but I do recommend CUDA over OpenCL for a number of reasons:

  1. 2/3 of GPU-related academic papers use CUDA over OpenCL
  2. Most GPU based and hybrid CPU/GPU supercomputers use NVIDIA GPUs usually with CUDA
  3. NVIDIA GPUs have the majority of marketshare of GPUs
  4. The software ecosystem for CUDA is much much better than that for OpenCL, and this particularly applies to AI, Machine Learning, Sparse/Dense Linear Algebra sub-routines, sorting and signal processing
  5. OpenCL has much more ‘boilerplate’ code related to the CPUs, GPUs or other processors available in a system, which makes the learning more difficult and the code more verbose
  6. IMO performance for the more basic algorithms (matrix multiplication, sorting, FFTs, image processing) tends to be better when implemented in CUDA over OpenCL
  7. NVIDIA tends to support CUDA development much more than AMD and the OpenCL people support OpenCL

In particular the US government has made a very large bet on CUDA over OpenCL (Department of Energy, Department of Defense) and companies such as Tesla, SpaceX and Facebook also use CUDA over OpenCL. This fact will definitely matter to students who want to get into HPC.
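To illustrate the boilerplate point above: a complete, runnable CUDA program that adds two vectors fits comfortably on one screen, whereas the equivalent OpenCL host code for platform/device/context/queue setup alone is typically longer. A minimal sketch using the standard CUDA runtime API (unified memory is an assumption made here to keep it short; `cudaMalloc` plus explicit `cudaMemcpy` works too, and error checking is omitted for brevity):

```cuda
#include <cstdio>

// Each thread adds one element of the two input vectors.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory: accessible from both host and device.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;  // round up
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the kernel to finish

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compile with `nvcc vecadd.cu -o vecadd`. This is roughly the scale of the first program a student would write in an introductory class; the OpenCL version needs all of this plus explicit platform, device, context, queue, program, and kernel-object setup on the host side.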

I don’t disagree with you, and would definitely say that CUDA > OpenCL. I was just stating that teaching OpenCL over CUDA might be more accessible.

My only quibble is regarding HPC/servers. FPGAs (Xilinx/Altera (Intel)) are making a substantial push into this market, and Altera (now Intel) is pushing CPUs paired with FPGAs. If you want the best performance you can get, you’re going to learn VHDL or Verilog, or hire someone who knows them, but for a beginner, programming in OpenCL is much preferred. FPGA companies are pushing OpenCL compatibility to make their devices more accessible. So while I agree that CUDA is currently preferred and the incumbent, OpenCL may return when BW/watt becomes more important (which I think it will, unless GPU technology drastically changes and improves in this metric).

So it really just means that CUDA might take over GPGPU, but in general OpenCL will do what it was intended to do and dominate the market where there are myriad programmable devices.

Yeah, hahaha.

I mean I know AMD is moving in the direction of HIP to convert CUDA into code that can be also executed on AMD GPUs (as well as other devices.)
Now, FPGA companies may end up learning that OpenCL is losing support and do something similar to AMD where they convert CUDA, or even maybe end up using HIP in some manner.

Honestly, I’m not exactly sure how this ends up playing out over the next few years, but I’m perfectly content learning CUDA and using that over OpenCL.
Unfortunately, my job requires that I know both, as we use GPUs and FPGAs for many of our applications.

Anyways, this is becoming off topic and therefore will be my last post regarding it.

I also want to offer a course next semester called Introduction to Parallel Programming via CUDA. As mentioned, I could buy an NVIDIA card and install it in my system, but then the students would be tied to that computer, most likely located in some university lab, which runs contrary to today’s norm of developing either locally or over the internet. We could give our students some kind of remote access, but it would be very, very nice if such a possibility existed over the internet.

This might be a bit late to the show, but there is a way to run CUDA code written in C++ via Google Colab, which I verified just now by following this guide:

It’s not the most pleasant in terms of user experience, but it can be an alternative, since an online shell for CUDA is not available as of yet.
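For anyone landing here later, the Colab route boils down to two notebook cells: one that writes a `.cu` file to disk, and one that invokes the preinstalled `nvcc` on a GPU runtime. A rough sketch (the filename and `-arch` flag are assumptions; the actual architecture depends on which GPU Colab assigns you, and the GPU runtime must be enabled under Runtime → Change runtime type):

```
# Cell 1: write the CUDA source to disk using a Jupyter cell magic
# (this line must be the first line of the cell).
%%writefile hello.cu

# Cell 2: compile and run with the nvcc that ships in the Colab image.
!nvcc hello.cu -o hello
!./hello

# Optional sanity check that a GPU is actually attached to the runtime:
!nvidia-smi
```

Not a real shell, but it gets students compiling and running kernels in a browser with no hardware purchase, which was the original ask.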