cudaGetSymbolAddress not working

Hello,

I would like to use 2 variables to get the minimum and maximum values of an image, but there is a problem.
Here is the code:

__device__ unsigned int i_MinImage;
__device__ unsigned int i_MaxImage;

.....

    void* ptr_min;
    void* ptr_max;

    cudaGetSymbolAddress((void **)&ptr_min, i_MinImage);
    cudaGetSymbolAddress((void **)&ptr_max, i_MaxImage);
    cudaMemset(ptr_min, 10, sizeof(int));
    cudaMemset(ptr_max, -10, sizeof(int));


    int iMin_host,iMax_host;
    cudaDeviceSynchronize();
    cudaMemcpy(&iMin_host, ptr_min, sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(&iMax_host, ptr_max, sizeof(int), cudaMemcpyDeviceToHost);

I know the code is not doing anything really interesting, but I would like to show you my problem.
I just write to and read back variables on the GPU. My iMin_host/iMax_host variables should be 10/-10, but I get strange values like 16840000, …

I don’t understand why this happens. I do other processing like Sobel filtering, and it all works properly, but this part sends me back wrong data.

What happens when you run the code under control of cuda-memcheck, and add proper CUDA error checking?
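For example, a minimal error-checking macro could look like the sketch below (checkCuda is just a name chosen for this illustration, not something in your code or in the CUDA runtime):

    // Illustrative only; requires <cstdio> and <cstdlib>.
    #define checkCuda(call)                                                  \
        do {                                                                 \
            cudaError_t err_ = (call);                                       \
            if (err_ != cudaSuccess) {                                       \
                fprintf(stderr, "CUDA error: %s at %s:%d\n",                 \
                        cudaGetErrorString(err_), __FILE__, __LINE__);       \
                exit(EXIT_FAILURE);                                          \
            }                                                                \
        } while (0)

    // Wrap each runtime call, for example:
    checkCuda(cudaGetSymbolAddress(&ptr_min, i_MinImage));
    checkCuda(cudaMemset(ptr_min, 10, sizeof(int)));
    checkCuda(cudaGetLastError());   // after kernel launches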

There is no error returned from the error checking, and cuda-memcheck reports no errors.

Remember that cudaMemset(), like standard memset(), fills memory one byte at a time. In your case, you are filling four bytes. So the first variable, where you use a byte with value 10 == 0x0a, is set to 0x0a0a0a0a == 168430090.
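Here is a quick host-side illustration of the same byte-filling behaviour with standard memset() (a standalone demo, not taken from your code):

    #include <cstdio>
    #include <cstring>

    int main()
    {
        int x;
        memset(&x, 10, sizeof(int));                // every byte becomes 0x0a
        printf("%d (0x%08x)\n", x, (unsigned)x);    // 168430090 (0x0a0a0a0a)
        memset(&x, -10, sizeof(int));               // every byte becomes 0xf6
        printf("%d (0x%08x)\n", x, (unsigned)x);    // -151587082 (0xf6f6f6f6)
        return 0;
    }

cudaMemset() does exactly the same thing to the device variables.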

How should this be done properly, then?
Shouldn’t the sizeof(int) take care of that?

You cannot use cudaMemset() to initialize a 4-byte integer to an arbitrary pattern. cudaMemset(), like memset(), is designed to fill memory with repetitions of a single byte; see the C++ reference of your choice. You could use cudaMemcpy() to set the device variable from the host, or pass the desired value as a kernel argument.
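For example, here is a sketch of the cudaMemcpy() route, reusing ptr_min/ptr_max from your snippet (the host variable names h_min_init/h_max_init are made up for this example):

    unsigned int h_min_init = 10;
    unsigned int h_max_init = (unsigned int)-10;   // wraps to 0xFFFFFFF6, since the symbols are unsigned

    // Copy through the addresses obtained with cudaGetSymbolAddress():
    cudaMemcpy(ptr_min, &h_min_init, sizeof(unsigned int), cudaMemcpyHostToDevice);
    cudaMemcpy(ptr_max, &h_max_init, sizeof(unsigned int), cudaMemcpyHostToDevice);

    // Or skip cudaGetSymbolAddress() and copy to the symbols directly:
    cudaMemcpyToSymbol(i_MinImage, &h_min_init, sizeof(unsigned int));
    cudaMemcpyToSymbol(i_MaxImage, &h_max_init, sizeof(unsigned int));

The kernel-argument alternative would simply mean passing the initial values as parameters to a small __global__ function that assigns them to i_MinImage and i_MaxImage.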