How can I check if a GPU supports the managed memory model?

I am new to CUDA, and am doing the online course on Accelerated Computing.

I tried to run one of the course examples on my Tesla C2075 GPU, and it seems that the cudaMallocManaged call returns a null pointer (I'm still learning, and I guess that seeing 0x0 in the pointer variable in the debugger means NULL?).

Could it be that Tesla C2075 GPU does not support CUDA managed memory?

If so, how should I replace cudaMallocManaged?

In general, how can I check whether a GPU supports this managed memory model?

First of all, in my opinion, any time you're having trouble with a CUDA code, you should be using proper CUDA error checking. (Not sure what that is? Just google "proper CUDA error checking", take the first hit, read it, and apply it to your code.) If you had been doing proper CUDA error checking, there wouldn't be any mystery here: the cudaMallocManaged call would return "operation not supported" on your C2075.
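If you're not sure what that looks like in practice, here is a minimal sketch of one common error-checking pattern (the checkCuda macro name is just illustrative, not a library API):

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Wrap every CUDA runtime call; print the error string and abort on failure.
    #define checkCuda(call)                                            \
        do {                                                           \
            cudaError_t err_ = (call);                                 \
            if (err_ != cudaSuccess) {                                 \
                fprintf(stderr, "CUDA error \"%s\" at %s:%d\n",        \
                        cudaGetErrorString(err_), __FILE__, __LINE__); \
                exit(EXIT_FAILURE);                                    \
            }                                                          \
        } while (0)

    int main() {
        int *data = NULL;
        // On a GPU without managed memory support, this prints
        // "operation not supported" instead of failing silently.
        checkCuda(cudaMallocManaged(&data, 1024 * sizeof(int)));
        checkCuda(cudaFree(data));
        return 0;
    }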

Beyond that, you should also run your codes with cuda-memcheck.

To learn how to program without the managed memory model, study a CUDA sample code like vectorAdd.

It's not as simple as just replacing cudaMallocManaged with cudaMemcpy: each cudaMallocManaged call is replaced with a host allocation, a device allocation, and cudaMemcpy operations to move data between them.
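As a rough sketch, the transformation typically looks like this (kernel, grid, block, and N are placeholders, not code from your course example):

    // Managed-memory version:
    //     int *a;
    //     cudaMallocManaged(&a, N * sizeof(int));
    //     kernel<<<grid, block>>>(a);   // host and device both touch 'a'

    // Explicit version: separate host and device allocations, plus copies.
    int *h_a = (int *)malloc(N * sizeof(int));     // host allocation
    int *d_a = NULL;
    cudaMalloc(&d_a, N * sizeof(int));             // device allocation
    // ... initialize h_a on the host ...
    cudaMemcpy(d_a, h_a, N * sizeof(int), cudaMemcpyHostToDevice);
    kernel<<<grid, block>>>(d_a);                  // kernel reads/writes d_a
    cudaMemcpy(h_a, d_a, N * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_a);
    free(h_a);

(Error checking is omitted for brevity; in real code, check the return value of every one of these calls.)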

To check whether your GPU supports managed memory, you can simply query its compute capability (managed memory, in some form, is supported on devices of compute capability 3.0 and higher). In addition, cudaGetDeviceProperties reports a number of device properties associated with managed memory, such as:

          int managedMemory;
          int pageableMemoryAccess;
          int concurrentManagedAccess;

To learn how to program with cudaGetDeviceProperties, study the deviceQuery CUDA sample code.
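If you just want the relevant subset of deviceQuery, a minimal standalone query might look like this (it checks device 0; loop over cudaGetDeviceCount for multiple GPUs):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaError_t err = cudaGetDeviceProperties(&prop, 0);  // device 0
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                    cudaGetErrorString(err));
            return 1;
        }
        printf("compute capability:      %d.%d\n", prop.major, prop.minor);
        printf("managedMemory:           %d\n", prop.managedMemory);
        printf("pageableMemoryAccess:    %d\n", prop.pageableMemoryAccess);
        printf("concurrentManagedAccess: %d\n", prop.concurrentManagedAccess);
        return 0;
    }

On your C2075 (compute capability 2.0), managedMemory will report 0.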