cudaMemcpyToSymbol writes 0 instead of value

Hi all,

Sorry to bother the forums again but I have a new headache. I’m using cudaMemcpyToSymbol to write a global variable from a C program to GPU memory so it can be accessed by any kernel function. Here’s the setup (function_init() is called from main in main.c if that matters):


// globals.h
Real Var;
__constant__ Real Var_dev;

// function_init(), in a file that does #include "globals.h"
cudaErrorCheck(cudaMemcpyToSymbol(Var_dev, &Var, sizeof(Real)));

printf("Var_host = %f\n", Var);
printf("Var_dev = %f\n", Var_dev);

This prints out:

Var_host = 1.500000 //The correct host value

Var_dev = 0.000000

Additionally, whenever I look at Var_dev from inside a kernel with cuPrintf, it also prints 0.000000. Am I using cudaMemcpyToSymbol correctly? I hope there isn’t a problem with leaving the declaration in globals.h, because I would prefer to use this variable in multiple kernels in different files with just one initialization. cudaErrorCheck does not report any errors, so it really seems to be writing 0.000000.



EDIT: Poor typesmanship

Try passing cudaMemcpyHostToDevice as the memcpy “kind” argument, and watch out: cudaMemcpyToSymbol takes an offset parameter before the kind parameter, which you want to be 0.
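For reference, a sketch of the fully explicit call (Real and Var_dev as declared in the original post; in C++ the offset and kind arguments default to 0 and cudaMemcpyHostToDevice, but C has no default arguments, so spelling them out cannot hurt):

```cuda
// Fully explicit form of the copy: symbol, host source, byte count,
// offset into the symbol, and the direction of the transfer.
cudaError_t err = cudaMemcpyToSymbol(Var_dev,                 // __constant__ symbol
                                     &Var,                    // host source
                                     sizeof(Real),            // bytes to copy
                                     0,                       // offset into symbol
                                     cudaMemcpyHostToDevice); // kind
if (err != cudaSuccess)
    printf("cudaMemcpyToSymbol failed: %s\n", cudaGetErrorString(err));
```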

You could also try passing &Var_dev instead of Var_dev.

Thanks for the reply – I tried adding ‘, 0, cudaMemcpyHostToDevice’ but this had no effect. Passing &Var_dev to cudaMemcpyToSymbol causes it to return an error instead of cudaSuccess. The values are still 0.000000 with cuPrintf from kernel code, but I was wondering whether I could access Var_dev from the host machine at all. It does print 0.000000 with a regular printf…
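On reading the symbol from the host: dereferencing Var_dev directly in host code does not read the GPU copy (which is why the host-side printf is not a reliable check). A sketch of the supported way to read a __constant__ symbol back, assuming the declarations from the original post:

```cuda
// Copy the device symbol's current value back into a host variable,
// then print the host copy. This is the valid host-side check.
Real check = 0;
cudaMemcpyFromSymbol(&check,                  // host destination
                     Var_dev,                 // __constant__ symbol
                     sizeof(Real),            // bytes to copy
                     0,                       // offset into symbol
                     cudaMemcpyDeviceToHost); // kind
printf("Var_dev (copied back to host) = %f\n", check);
```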

Try "Var_dev" (as a string) instead of Var_dev as the first argument.
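That is, the string-name form, which only works on older toolkits (it was deprecated and then removed in CUDA 5.0, where only the symbol itself is accepted):

```cuda
// Pre-CUDA-5.0 only: look up the symbol by its string name.
// On CUDA 5.0 and later this form no longer compiles/works.
cudaMemcpyToSymbol("Var_dev", &Var, sizeof(Real), 0, cudaMemcpyHostToDevice);
```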