GPUDirect memory pinning possible on Fermi?

Is GPU memory pinning supported on Fermi, or only on the Kepler architecture?
(Clarification: the question is about pinning GPU memory, not host memory.)

Specifically, the call to cuPointerGetAttribute(&tokens, CU_POINTER_ATTRIBUTE_P2P_TOKENS, gpu_mem) fails with CUDA_ERROR_INVALID_DEVICE on a GeForce GTX 580.
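For reference, a minimal driver-API probe that reproduces the failure looks like this (a sketch, untested here; the allocation size and device index are arbitrary):

```c
/* Probe for GPUDirect RDMA P2P tokens on device 0. On GeForce boards
 * (this GTX 580 included) the query fails with CUDA_ERROR_INVALID_DEVICE. */
#include <cuda.h>
#include <stdio.h>

int main(void)
{
    CUdevice dev;
    CUcontext ctx;
    CUdeviceptr gpu_mem;
    CUDA_POINTER_ATTRIBUTE_P2P_TOKENS tokens;
    CUresult rc;

    if (cuInit(0) != CUDA_SUCCESS ||
        cuDeviceGet(&dev, 0) != CUDA_SUCCESS ||
        cuCtxCreate(&ctx, 0, dev) != CUDA_SUCCESS) {
        fprintf(stderr, "no usable CUDA device\n");
        return 1;
    }
    cuMemAlloc(&gpu_mem, 1 << 20);  /* 1 MiB device allocation */

    rc = cuPointerGetAttribute(&tokens, CU_POINTER_ATTRIBUTE_P2P_TOKENS,
                               gpu_mem);
    if (rc == CUDA_SUCCESS)
        printf("p2pToken=0x%llx vaSpaceToken=0x%x\n",
               (unsigned long long)tokens.p2pToken, tokens.vaSpaceToken);
    else
        printf("cuPointerGetAttribute failed: error %d\n", (int)rc);

    cuMemFree(gpu_mem);
    cuCtxDestroy(ctx);
    return 0;
}
```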

I’m asking because the RDMA for GPUDirect document shows a `fermi` struct within the `nvidia_p2p_page` struct, which might imply this is possible on Fermi.

On Ubuntu 12.04 x64 with a GTX 690 and CUDA 5.0 we’re struggling with the same problem, so this is not a Fermi-only issue. We’ve only been at it for a few days, but we’re running out of ideas. I added cuPointerGetAttribute() to the simple P2P sample, which otherwise runs fine on the 690, and neither the g0 nor the g1 pointer returns the tokens; we only get the invalid-device error.

We are writing our own driver to move data from a custom sensor to the GPU and really need this functionality to proceed at full performance. Otherwise we have to copy the data twice, and there’s a lot of it.
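Until RDMA is available to us, one interim workaround is to have the driver DMA the sensor data into a host buffer and page-lock that buffer with cudaHostRegister(), so cudaMemcpyAsync() can DMA from it directly instead of going through an extra staging copy. A rough sketch (buffer size and names are illustrative, not from our driver):

```c
/* Fallback path: pin an existing host buffer so the GPU can DMA from it
 * directly, saving one of the two copies an unpinned buffer would need. */
#include <cuda_runtime.h>
#include <stdlib.h>

#define BUF_BYTES (16 << 20)  /* example: 16 MiB per sensor frame */

int main(void)
{
    void *host_buf = malloc(BUF_BYTES);  /* filled by the sensor driver */
    void *dev_buf;
    cudaStream_t stream;

    cudaMalloc(&dev_buf, BUF_BYTES);
    cudaStreamCreate(&stream);

    /* Page-lock the buffer after the fact (CUDA 4.0+). */
    cudaHostRegister(host_buf, BUF_BYTES, cudaHostRegisterDefault);

    /* One async DMA per frame; can overlap with compute on other streams. */
    cudaMemcpyAsync(dev_buf, host_buf, BUF_BYTES,
                    cudaMemcpyHostToDevice, stream);
    cudaStreamSynchronize(stream);

    cudaHostUnregister(host_buf);
    cudaFree(dev_buf);
    cudaStreamDestroy(stream);
    free(host_buf);
    return 0;
}
```

This still costs one host-to-device copy per frame, so it’s a stopgap rather than a replacement for true RDMA.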

I believe that CUDA 5.0 only enables GPUDirect RDMA on Tesla-class cards and does not enable it on GeForce boards, whether Fermi or Kepler.

Moreover, there are specific hardware limitations that prevent GPUDirect RDMA from working well on Fermi (see the work on APEnet+, which was the first NIC to support GPUDirect RDMA before it was known by that name).

Yes, you need CUDA 5.0 plus a professional (Tesla/Quadro-class) GPU.