So I have some code in C that creates a 3D particle mesh. Basically, a 3D array of nodes, each containing a pointer to a particle structure.
It works fine in C and on my CPU but I want to try and run this on my GPU.
So I just said, “F*** it” and changed my file extension from .c to .cu and lo and behold, it compiled. However, I wasn’t using cudaMalloc, just plain malloc, so I’m assuming my data was being allocated in system RAM and not GPU memory, which is absolutely fine.
But here’s the thing: `nvcc -o test2 test.cu` succeeded and produced the test2 executable. So when I run `./test2`, does it actually execute on my GPU, writing to system RAM and then freeing it normally? Or did I essentially do nothing to change how my original code runs?
I don’t think I need to post the code. I’m just wondering: if I take working C code and compile it with nvcc, does the resulting executable actually run on my GPU?
Like, do GPU threads only run on data that’s been written to the GPU’s own pool of memory, or can the GPU be all like, “Pfft, system RAM’s good enough, let’s thread this s***.”?
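Edit: for reference, here’s my (possibly wrong) understanding of what a real GPU version would actually need — explicit device allocation, host-to-device copies, and a `__global__` kernel launched with the `<<<...>>>` syntax. This is just an untested sketch with made-up names and sizes, not my actual mesh code:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Marked __global__, so this function runs on the GPU, one thread per element.
__global__ void scale(float *p, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] *= 2.0f;
}

int main(void) {
    const int n = 1024;
    float *h = (float *)malloc(n * sizeof(float));   // system RAM (host)
    for (int i = 0; i < n; i++) h[i] = (float)i;

    float *d;
    cudaMalloc(&d, n * sizeof(float));               // GPU RAM (device)
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(d, n);           // actual GPU threads here
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("h[3] = %f\n", h[3]);
    cudaFree(d);
    free(h);
    return 0;
}
```

My assumption is that without any of this — no `__global__` functions, no kernel launches — nvcc just hands everything to the host compiler and it all runs on the CPU like before. Is that right?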