This may be a stupid response, but couldn’t you just buy a GPU, install it in your desktop, download CUDA 8 and an IDE, and do this on your own machine?
I mean you might even have a GPU in your computer now, why not just use that?
I get that these online shells exist to make your life easier when you don’t want to download a compiler, but GPUs are more expensive and power-hungry, which makes it less likely that servers will have them installed for you to access.
You don’t even need an expensive GPU to run basic CUDA applications.
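To put "basic" in perspective, here is a minimal vector-add sketch in standard CUDA C++ that should run on just about any CUDA-capable card (this uses unified memory, available since CUDA 6, to keep the host code short; error checking omitted for brevity):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory is accessible from both host and device.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int block = 256;
    int grid = (n + block - 1) / block;  // enough blocks to cover n elements
    vecAdd<<<grid, block>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compile with `nvcc vecadd.cu -o vecadd`; nothing here requires a high-end GPU.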
Yeah, I don’t think there exists anything online that can do this.
Maybe talk to one of the server farm companies and see if they can provide a service like this?
NIMBIX/JARVICE provides a service similar to what you are looking for. They have servers with CPUs/FPGAs/GPUs installed that you can access and work on (for a price, obviously). You might even be able to convince them that enough people want a CUDA platform on their cloud service that they would create one. That would solve your issue, but it would cost money up front; it might end up being cheaper to just buy GPUs, though ease of access seems to be what you are after.
While this forum is NVIDIA/CUDA based, a wise option might be to teach the class OpenCL instead. The two languages are relatively similar, and OpenCL has the advantage that it can run on CPUs as well as GPUs. That makes it more accessible for your class, as I’m assuming everyone has a laptop or something similar to program on. This way you can still teach the basics, much as with CUDA, but have something everyone can actually program in.
Most online CUDA classes use AWS GPU instances, which are not hard to set up.
This is not as cost-effective for long-running compute as simply buying your own (say) GTX 1060, but it could work well for a class with dozens (or hundreds) of students, each of whom might put in, say, 100 hours during a class to learn CUDA, costing only about $25.
I may be very biased, but I recommend CUDA over OpenCL for a number of reasons:
2/3 of GPU-related academic papers use CUDA rather than OpenCL.
Most GPU-based and hybrid CPU/GPU supercomputers use NVIDIA GPUs, usually with CUDA.
NVIDIA GPUs hold the majority of the GPU market share.
The software ecosystem for CUDA is much, much better than that for OpenCL; this particularly applies to AI, machine learning, sparse/dense linear algebra subroutines, sorting, and signal processing.
OpenCL requires much more ‘boilerplate’ code for enumerating the CPUs, GPUs, or other processors available in a system, which makes learning more difficult and the code more verbose.
IMO, performance for the more basic algorithms (matrix multiplication, sorting, FFTs, image processing) tends to be better when implemented in CUDA rather than OpenCL.
NVIDIA supports CUDA development much more actively than AMD and the rest of the OpenCL community support OpenCL.
In particular, the US government has made a very large bet on CUDA over OpenCL (Department of Energy, Department of Defense), and companies such as Tesla, SpaceX, and Facebook also use CUDA over OpenCL. That will definitely matter to students who want to get into HPC.
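As one illustration of the ecosystem point above: a dense single-precision matrix multiply through cuBLAS (which ships with the CUDA toolkit) is only a few calls. This is a sketch with tiny matrices and no error checking, assuming you link against cuBLAS (`nvcc sgemm.cu -lcublas`):

```cuda
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 4;  // small square matrices, just for illustration
    float *A, *B, *C;
    cudaMallocManaged(&A, n * n * sizeof(float));
    cudaMallocManaged(&B, n * n * sizeof(float));
    cudaMallocManaged(&C, n * n * sizeof(float));
    for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 0.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C; cuBLAS assumes column-major storage.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();

    printf("C[0] = %f\n", C[0]);  // each entry is a dot product: 4 * (1 * 2) = 8
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

Getting comparable tuned library coverage on the OpenCL side typically means pulling in third-party projects rather than a single vendor toolkit.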
I don’t disagree with you, and would definitely say that CUDA > OpenCL. I was just stating that teaching OpenCL over CUDA might be more accessible.
My only quibble is regarding HPC/servers. FPGAs (Xilinx/Altera (Intel)) are making a substantial push into this market, and Altera (now Intel) is pushing CPUs paired with FPGAs. If you want the best performance you can get, you’re going to learn, or hire someone who knows, VHDL or Verilog; but for a beginner, programming in OpenCL is much preferred. FPGA companies are pushing OpenCL compatibility to make their devices more accessible. So while I agree that CUDA is preferred (currently) and the incumbent, OpenCL may return when bandwidth per watt becomes more important (which I think it will, unless GPU technology drastically changes and improves on this metric).
I know AMD is moving in the direction of HIP, which converts CUDA into code that can also be executed on AMD GPUs (as well as other devices).
Now, FPGA companies may end up concluding that OpenCL is losing support and do something similar to AMD, converting CUDA, or maybe even adopt HIP in some manner.
Honestly, I’m not exactly sure how this ends up playing out over the next few years, but I’m perfectly content learning CUDA and using that over OpenCL.
Unfortunately, my job requires that I know both, as we use GPUs and FPGAs for many of our applications.
Anyway, this is getting off topic, so this will be my last post on it.
I also want to offer a course next semester called Introduction to Parallel Programming via CUDA, and as mentioned I could buy an NVIDIA card and install it in my system. However, the students would then be tied to that one computer, most likely located in some university lab, which runs contrary to how development is normally done nowadays: either very locally or over the internet. Again, we could give our students some kind of remote access and so on, but it would be VERY nice if such a possibility existed over the internet.