Multi GPU question

I’m using this in my .cu file:
__constant__ float ImageXValues[ IMAGE_VALUES_SIZE ];
__constant__ float ImageYValues[ IMAGE_VALUES_SIZE ];

For a multi-GPU environment, where I use a different host thread per GPU, how should I define the constant memory per thread/GPU?
What I mean is: if I had a pointer to device memory, I'd need to keep one per thread/GPU. How do I do that for constant memory?

Hope I’m clear enough :)


I think this is per GPU. The device is determined by the CUDA context that is current when you copy data into your constant memory, so probably no additional steps are required.

Did you ever find a definitive answer for this? I am wondering exactly the same thing…

The conclusion in another thread which discussed this in more detail was “yes, each context sees a different copy of the constant variable”. You just have to be sure to load the constant variable from each host thread.
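A minimal sketch of what that looks like with the runtime API (the helper name `loadConstants` is mine; `ImageXValues` and `IMAGE_VALUES_SIZE` are from the original post). Each device gets its own copy of the `__constant__` symbol, so each host thread just selects its GPU and copies into it:

```cuda
#include <cuda_runtime.h>

#define IMAGE_VALUES_SIZE 256

// One copy of this symbol exists per device.
__constant__ float ImageXValues[ IMAGE_VALUES_SIZE ];

// Called once from each host thread with that thread's device ID.
// cudaSetDevice() binds the calling host thread to the given GPU, so
// the cudaMemcpyToSymbol() below fills in THAT device's copy only.
void loadConstants( int device, const float* hostX )
{
    cudaSetDevice( device );
    cudaMemcpyToSymbol( ImageXValues, hostX,
                        IMAGE_VALUES_SIZE * sizeof(float) );
}
```

So there is nothing extra to keep per thread for the constant memory itself; the only per-thread state is which device the thread has selected.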


It's per GPU, as AndreiB and seibert said. I simply created a .cuh file and included it from both .cu files. As seibert said, you need to populate the constants/textures/whatever per host thread, against the appropriate GPU device.


Do you need to name the constants differently per thread?

What about cudaArray* and other basic-type pointers (like float*) defined in that header? I suppose those have to be defined separately for each GPU?

I use a regular C struct holding all the data members (floats, ints, ...) and arrays for each GPU, i.e. an array/STL map/whatever on the host side keeping this information per GPU, and then I pass the appropriate struct to the C function that calls the kernel.
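The per-GPU bookkeeping described above might look something like this sketch (all names here, `GpuState`, `MAX_GPUS`, the members, are hypothetical; unlike `__constant__` symbols, device pointers and cudaArrays are per-device resources, so the host keeps one set of them per GPU):

```cuda
#include <cuda_runtime.h>

#define MAX_GPUS 8

// One instance of this struct per GPU, held on the host side.
typedef struct {
    int        device;    // which GPU this state belongs to
    float*     d_values;  // device pointer, only valid on `device`
    cudaArray* d_array;   // likewise a per-device resource
    int        count;     // problem size for this GPU's share of the work
} GpuState;

static GpuState gpus[ MAX_GPUS ];

// Each host thread fills in its own entry after selecting its GPU,
// then passes that entry to the C function that launches the kernel.
void initGpuState( int device, int count )
{
    GpuState* s = &gpus[ device ];
    s->device = device;
    s->count  = count;
    cudaSetDevice( device );
    cudaMalloc( (void**)&s->d_values, count * sizeof(float) );
}
```

The point is that the struct travels with the host thread, so the kernel-launching code never has to guess which device's pointers it is holding.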