Question about CUDA memory allocation

Hi, I am confused about how the compiler can know that an address comes from GPU memory rather than CPU memory. For example:
float *device_a;
cudaMalloc((void **)&device_a, size);

How can the compiler tell that the address stored in device_a refers to GPU memory? Is there some mark in the returned address? Thanks for your attention.

It can’t (at least before CUDA 4.0). The pointer is just an ordinary address value, so it is your responsibility to use it only as a device pointer.
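As a side note: since CUDA 4.0, on platforms with unified virtual addressing (UVA) the *runtime* (still not the compiler) can report which address space a pointer belongs to, via cudaPointerGetAttributes. A minimal sketch, assuming a UVA-capable device; note the attribute field is named `type` in recent CUDA versions (`memoryType` in older ones):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    float *device_a = nullptr;
    size_t size = 256 * sizeof(float);
    cudaMalloc((void **)&device_a, size);

    // Ask the runtime (not the compiler) what kind of memory this address is.
    cudaPointerAttributes attr;
    cudaError_t err = cudaPointerGetAttributes(&attr, device_a);
    if (err == cudaSuccess && attr.type == cudaMemoryTypeDevice) {
        printf("device_a points to device memory\n");
    }

    cudaFree(device_a);
    return 0;
}
```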

Thanks. But I think the address returned from cudaMalloc() is different from a host memory address. For example, you cannot dereference it directly on the host; you can only use it as a device pointer.
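Right: dereferencing a cudaMalloc pointer on the host is undefined behavior (typically a segfault), even though the value itself is just a number. A small sketch of the correct pattern, moving data through the runtime instead of dereferencing on the host:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    float *device_a = nullptr;
    cudaMalloc((void **)&device_a, sizeof(float));

    // *device_a = 1.0f;  // WRONG on the host: device_a is not a host address

    // Correct: let the runtime copy across address spaces.
    float host_val = 2.0f;
    cudaMemcpy(device_a, &host_val, sizeof(float), cudaMemcpyHostToDevice);

    float back = 0.0f;
    cudaMemcpy(&back, device_a, sizeof(float), cudaMemcpyDeviceToHost);
    printf("round trip: %f\n", back);

    cudaFree(device_a);
    return 0;
}
```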