Constant memory on multiple GPU in unified memory environment

We have developed a code that uses multiple GPUs, with two devices per CPU. We would like to use constant memory, but it is unclear to us how to have constant memory on both cards and how to distinguish between the two.

Our code looks like this:

#include <cuda.h>
#include <stdio.h>

__constant__ float test_ptr[10];

void checkErr()
{
    cudaError err = cudaGetLastError();
    if (err != cudaSuccess) {
        fprintf(stderr, "Cuda error: %s.\n", cudaGetErrorString(err));
    }
}

int main(int argc, char *argv[])
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 1);
    printf("device 1 has unified addressing: %d\n", prop.unifiedAddressing);

    cudaGetDeviceProperties(&prop, 0);
    printf("device 0 has unified addressing: %d\n", prop.unifiedAddressing);

    void *ptr;

    cudaSetDevice(0);
    cudaGetSymbolAddress(&ptr, test_ptr);
    printf("Device 0: symbol address: %p\n", ptr);

    cudaSetDevice(1);
    cudaGetSymbolAddress(&ptr, test_ptr);
    printf("Device 1: symbol address: %p\n", ptr);

    /* cudaPointerAttributes attr;
    cudaPointerGetAttributes(&attr, ptr);
    printf("Device 0: pointer %p finds itself at device %d\n", ptr, attr.device); */

    return 0;
}


When we run this, we get the same pointer back for both GPUs. However, we expected to see two different pointers, since the memory is unified. This code was tested on GTX 480 cards and on M2070 cards, with both CUDA 4.0 and CUDA 4.1.

Output snippet:

device 1 has unified addressing: 1

device 0 has unified addressing: 1

Device 0: symbol address: 0x20200000

Device 1: symbol address: 0x20200000

Any help with this is highly appreciated.

According to section 3.2.7 of the CUDA C Programming Guide, the unified address space covers only allocations made via cudaHostAlloc() or any of the cudaMalloc*() functions, i.e., not statically allocated variables such as __constant__ arrays. A __constant__ variable is instantiated separately on each device, so there is no single unified pointer for it.
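The practical upshot is that you do not distinguish the per-device copies by pointer at all: you select a device with cudaSetDevice() and then call cudaMemcpyToSymbol(), which writes into the current device's instance of the symbol. A minimal sketch (the symbol name coeffs and the kernel are made up for illustration):

```cuda
#include <cuda.h>
#include <stdio.h>

// One instance of this array exists on every device.
__constant__ float coeffs[10];

__global__ void readCoeffs(float *out)
{
    out[threadIdx.x] = coeffs[threadIdx.x];
}

int main(void)
{
    float host_data[10] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
    int ndev = 0;
    cudaGetDeviceCount(&ndev);

    // Fill the constant memory of each device in turn: cudaMemcpyToSymbol
    // always targets the symbol instance on the currently selected device.
    for (int d = 0; d < ndev; ++d) {
        cudaSetDevice(d);
        cudaMemcpyToSymbol(coeffs, host_data, sizeof(host_data));
    }

    // Kernels launched on device d will now see device d's copy of coeffs.
    return 0;
}
```

If the two devices need different constants, pass a different host buffer to cudaMemcpyToSymbol() inside the loop.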