Hello everyone. I have a question: I'm writing CUDA code and have already used shared memory. If I want to keep improving performance, do I need to put my parameters into constant memory? I'm confused because constant memory seems the same as global memory: both are off-chip. Why does CUDA need constant memory at all? Thanks for any responses.
Constant memory is best for cases where a large number of threads all read the same one or few memory locations. It uses broadcasting: the value is read once and then broadcast to the threads of a half-warp (as far as I remember; please correct me if I'm wrong). It is also cached on-chip in a dedicated constant cache, so repeated reads that hit the cache never go to device memory, which is what makes it different from plain global memory.
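To illustrate the access pattern constant memory is good at, here is a small sketch. The names (`coeff`, `scale`) and sizes are made up for illustration; the point is that every thread reads the same constant value in the same instruction:

```cuda
// coeff lives in constant memory; it is set once from the host
// with cudaMemcpyToSymbol and then read by every thread.
__constant__ float coeff;

__global__ void scale(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = coeff * in[i];  // all threads read the same address,
                                 // so the hardware broadcasts the value
}
```

If threads of a warp read *different* constant addresses, the reads are serialized, so constant memory only pays off for this uniform-access pattern.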
What do you mean by "parameter"?
Hi, I have some constants, but right now I pass them to my kernel through global memory pointers.
[b]I have another problem. I tried to use constant memory and wrote the following code:
__constant__ int M;
int M_Host = 5;
cudaMemcpyToSymbol(&M,&M_Host , sizeof(M));
In my kernel, I found that M didn't get any value from cudaMemcpyToSymbol. What happened? The code compiles successfully.[/b]
Shouldn't it be like this?
// after the headers
__constant__ int M;
....
// inside the code
cudaMemcpyToSymbol(M, &M_Host, sizeof(int));
When a variable is declared this way it has file scope on the device, so all kernels in that translation unit can access it directly without receiving it as an argument. Note that the first argument to cudaMemcpyToSymbol is the symbol itself, not its address: passing &M is what made your copy fail.
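Putting the pieces together, here is a minimal self-contained sketch (the names M and M_Host are taken from the posts above; error checking is kept only on the one call in question):

```cuda
#include <cstdio>

__constant__ int M;  // declared at file scope, lives in constant memory

__global__ void show()
{
    // Every kernel in this translation unit can read M directly.
    printf("M = %d\n", M);
}

int main()
{
    int M_Host = 5;

    // Pass the symbol itself, not its address, as the first argument.
    cudaError_t err = cudaMemcpyToSymbol(M, &M_Host, sizeof(M));
    if (err != cudaSuccess) {
        printf("cudaMemcpyToSymbol failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    show<<<1, 1>>>();
    cudaDeviceSynchronize();  // wait so the kernel's printf is flushed
    return 0;
}
```

Checking the return value of cudaMemcpyToSymbol is worthwhile here: with the wrong first argument the call fails at runtime with cudaErrorInvalidSymbol, while the compilation still succeeds, which matches what you observed.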