How do I tell if my code is really running on my GPU?

Hello All,

So I have some code in C that creates a 3D particle mesh. Basically, a 3D array of nodes, each of which contains a pointer to a particle structure.

It works fine in C and on my CPU but I want to try and run this on my GPU.

So I just said, "F*** it" and changed my file extension from .c to .cu and lo and behold, it compiled. However, I wasn’t using cudaMalloc, just plain malloc, so I’m assuming my data was being written to system RAM and not GPU RAM, which is absolutely fine.

But here’s the thing: "nvcc -o test2 test.cu" succeeded and created the test2 executable. So when I type "./test2", does it really run on my GPU, writing to system RAM and then freeing it normally? Or did I essentially do nothing to modify my original code?

I don’t think I need to post the code; I’m just wondering: if I take working C code and compile it with nvcc, does the resulting executable actually run on my GPU?

Like, is a GPU thread only run if it’s written to the GPU’s pool of memory, or can a GPU be all like, "Pfft, system RAM’s good enough, let’s thread this s***."?

I think you have misunderstood how CUDA works. Nothing happens on the GPU unless you explicitly say so. Your renamed file is a valid CUDA program (since it is a valid C program), but one where no function calls are made to the GPU.

I would suggest you read chapter 2 of the CUDA C Programming Guide for a good introduction to the programming model.
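To make the distinction concrete, here is a minimal sketch of what an actual GPU program looks like. Nothing here comes from your code; the kernel name `addOne` and the array sizes are just placeholders for illustration. The point is that every step involving the GPU is explicit: you allocate device memory with cudaMalloc, copy data over with cudaMemcpy, launch a `__global__` kernel with the `<<<...>>>` syntax, and copy results back. A renamed .c file contains none of these, so nvcc compiles it as ordinary host code that runs entirely on the CPU.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// A trivial kernel: each GPU thread increments one array element.
__global__ void addOne(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main(void) {
    const int n = 256;
    size_t bytes = n * sizeof(float);

    float h_data[n];                     // host buffer (system RAM)
    for (int i = 0; i < n; i++) h_data[i] = (float)i;

    float *d_data;                       // device buffer (GPU RAM)
    cudaMalloc(&d_data, bytes);          // explicit GPU allocation
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    // Explicit kernel launch: 128 threads per block, enough blocks for n.
    addOne<<<(n + 127) / 128, 128>>>(d_data, n);
    cudaDeviceSynchronize();             // wait for the GPU to finish

    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_data);

    printf("h_data[0] = %f\n", h_data[0]);
    return 0;
}
```

If you removed the cudaMalloc/cudaMemcpy calls and the `<<<...>>>` launch, this would be plain C again, and nvcc would happily compile it into a CPU-only executable, which is exactly what happened with your renamed file.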

Lol yeah, you’re right. I realized that yesterday and I finally found a good slideshow online that shows you how to call the GPU and allocate memory, etc. Thank you very much :)