Some advice for a newbie

Hi.

I’m starting to work with GPU acceleration, and I may need some advice on a few topics.

  • Is there any need to track the temperature of the GPU to avoid possible damage?

  • As far as I know, there are many languages that can be used with CUDA. I see that C can be used with no problem, and Java also has some libraries… Do they all end up with the same performance? Is there any recommendation on this point?

Thanks a lot!

No. On actively-cooled GPUs, the dynamic fan control on the GPU will crank up the fan speed to prevent overheating; for passively-cooled ones this is the responsibility of the server enclosure. Should the programmed temperature threshold be exceeded, the GPU will be throttled (slowed down) or shut off to prevent damage. In other words, this works just like your CPU’s thermal management.
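
That said, if you ever want to log the temperature yourself, nvidia-smi reports it from the command line, and NVML exposes it programmatically. Here is a rough sketch of an NVML query (error checking omitted for brevity; device index 0 is just an assumption about your setup):

```cpp
// Rough sketch: query GPU core temperature via NVML.
// Compile with: nvcc gputemp.cu -lnvidia-ml   (error checking omitted)
#include <nvml.h>
#include <cstdio>

int main(void)
{
    nvmlDevice_t dev;
    unsigned int tempC = 0;

    nvmlInit();                                           // initialize NVML
    nvmlDeviceGetHandleByIndex(0, &dev);                  // assume first GPU
    nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &tempC);
    printf("GPU 0 core temperature: %u C\n", tempC);
    nvmlShutdown();
    return 0;
}
```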

CUDA is a subset of C++, and supports most (all?) C++11 features. Many other programming environments offer CUDA interoperability via so-called bindings, from Python to Matlab. An internet search will tell you whether CUDA bindings exist for your favorite programming language. Wikipedia provides a list of bindings, but I can’t tell how accurate or up-to-date it is: https://en.wikipedia.org/wiki/CUDA#Language_bindings. If you work in a domain where Fortran is still a major programming language, there is CUDA Fortran, which directly targets the GPU: https://developer.nvidia.com/cuda-fortran
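
To give a feel for what plain CUDA C++ looks like next to the various bindings, here is a minimal vector-addition sketch (toy problem size, no error checking, just to show the shape of a kernel plus its launch):

```cpp
// Minimal CUDA C++ sketch: element-wise vector addition c = a + b.
// Compile with: nvcc vadd.cu
#include <cstdio>

__global__ void vadd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));       // unified memory for simplicity
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);     // enough 256-thread blocks
    cudaDeviceSynchronize();                        // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);                    // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```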

So, that’s the point. Is it better to work with, let’s say, C++, or to work with jCUDA? I mean, is there any performance penalty? I don’t mind adapting to another language, since the algorithm I have in mind is quite basic and doesn’t use language-specific functions that can’t be found in most languages.

The other point is now totally clear.

Thanks.

Define “better”. My programming background is on the C/C++ path, so CUDA itself is the perfect fit for me. Also, the work I typically do readily lends itself to the use of a language in the C/C++ family.

Your programming background, and your current environment, may be completely different. You would want to make a choice based on your background and environment. For example, in some domains many people use Java, in others they use Python, yet others predominantly use Matlab, etc. If you work in those domains, it seems useful to approach CUDA from within the context of those programming environments.

Most people pick up parallel programming to speed up specific processing tasks in their domain, and GPUs are a cost-effective way to do that. That might involve relatively little actual CUDA programming, and instead mostly use the many performance libraries that are part of the CUDA ecosystem.
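
As a rough example of that library route, here is roughly what a SAXPY (y = alpha*x + y) looks like when you let cuBLAS do the work instead of writing a kernel yourself (error checking omitted):

```cpp
// Sketch of the "use a library" route: SAXPY (y = alpha*x + y) via cuBLAS.
// Compile with: nvcc saxpy.cu -lcublas   (error checking omitted)
#include <cublas_v2.h>
#include <cstdio>

int main(void)
{
    const int n = 1 << 20;
    const float alpha = 2.0f;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 3.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSaxpy(handle, n, &alpha, x, 1, y, 1);  // the library picks the kernel
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                 // expect 2*1 + 3 = 5
    cublasDestroy(handle);
    cudaFree(x); cudaFree(y);
    return 0;
}
```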

If you just want to get exposure to the nitty-gritty of parallel programming, straight CUDA is probably going to work as well as any other approach. You get to see the nuts and bolts up close, so to speak. That is useful background knowledge no matter how you branch out later.

I meant whether there’s any penalty for using one language or another, but I think using C/C++ will be perfect for my purpose.

Thanks for all.