Hi all,
Sorry to bother the forums again, but I have a new headache. I’m using cudaMemcpyToSymbol to write a global variable from a C program into GPU memory so it can be accessed by any kernel function. Here’s the setup (function_init() is called from main in main.c, if that matters):
globals.h:
Real Var;
__constant__ Real Var_dev;
function.cu:
#include "globals.h"
void function_init() {
    cudaErrorCheck(cudaMemcpyToSymbol(Var_dev, &Var, sizeof(Real)),
                   "cudaMemcpyToSymbol");
    printf("Var_host = %f\n", Var);
    printf("Var_dev = %f\n", Var_dev);
}
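In case it helps, here is how I would condense the whole thing into one standalone file, reading the value back with cudaMemcpyFromSymbol as a sanity check rather than printing the symbol directly from host code. (Real is assumed to be a typedef for double here; in my actual project it comes from globals.h.)

```cuda
#include <cstdio>

typedef double Real;   // assumption: Real is double in my real code

__constant__ Real Var_dev;

int main() {
    Real Var = 1.5;

    // Copy the host value into constant memory.
    cudaMemcpyToSymbol(Var_dev, &Var, sizeof(Real));

    // Read it back out to verify the copy, instead of
    // printing Var_dev directly from host code.
    Real check = 0.0;
    cudaMemcpyFromSymbol(&check, Var_dev, sizeof(Real));

    printf("Var_host = %f\n", Var);
    printf("Var_dev (copied back) = %f\n", check);
    return 0;
}
```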
This prints out:
Var_host = 1.500000 //The correct host value
Var_dev = 0.000000
Additionally, whenever I look at Var_dev from inside a kernel with cuPrintf, it also comes back as 0.000000. Am I using cudaMemcpyToSymbol appropriately? I hope there isn’t a problem with leaving the declaration in globals.h, because I’d prefer to use this variable in kernels across different files with just one initialization. cudaErrorCheck does not report any errors; it really seems to be writing 0.000000.
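For completeness, this is the kind of kernel-side check I mean, sketched with device-side printf (which I believe needs sm_20 or later) instead of cuPrintf, and again assuming Real is just a typedef for double:

```cuda
#include <cstdio>

typedef double Real;   // assumption: Real is double

__constant__ Real Var_dev;

// Print the constant from the device side.
__global__ void print_var() {
    printf("Var_dev in kernel = %f\n", Var_dev);
}

int main() {
    Real Var = 1.5;
    cudaMemcpyToSymbol(Var_dev, &Var, sizeof(Real));

    print_var<<<1, 1>>>();
    cudaDeviceSynchronize();   // flush device-side printf output
    return 0;
}
```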
Thanks!
S
EDIT: Poor typesmanship