How to allocate contiguous GPU memory and get its physical address in the latest CUDA?

I am trying to allocate contiguous GPU memory and obtain its starting physical address on Ubuntu 20.04. Common memory allocation functions such as cudaMalloc apparently do not do what I need. Does an API exist in the latest CUDA release that can do this?

This means:

  1. allocate GPU memory that is contiguous in both virtual and physical address space
  2. get the starting physical memory address

Can anyone answer my question, ideally with code? Thanks!
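To make the requirement concrete, here is the closest thing I have found so far: a sketch using CUDA's virtual memory management driver API (cuMemCreate / cuMemAddressReserve / cuMemMap). As far as I can tell it gives a contiguous virtual range backed by an explicit physical allocation, but it does not guarantee or expose physical addresses, so it only covers half of what I need:

```cpp
// Sketch only: CUDA's VMM driver API reserves a contiguous VIRTUAL range
// backed by an explicit physical allocation, but it does NOT expose
// physical addresses to user space. Build with: nvcc vmm.cpp -o vmm -lcuda
#include <cuda.h>
#include <cstdio>
#include <cstdlib>

#define CHECK(call)                                              \
    do {                                                         \
        CUresult r = (call);                                     \
        if (r != CUDA_SUCCESS) {                                 \
            const char *msg;                                     \
            cuGetErrorString(r, &msg);                           \
            std::fprintf(stderr, "%s failed: %s\n", #call, msg); \
            std::exit(1);                                        \
        }                                                        \
    } while (0)

int main() {
    CHECK(cuInit(0));
    CUdevice dev;
    CHECK(cuDeviceGet(&dev, 0));
    CUcontext ctx;
    CHECK(cuCtxCreate(&ctx, 0, dev));

    // Describe a pinned device-memory allocation on device 0.
    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = dev;

    size_t gran = 0;
    CHECK(cuMemGetAllocationGranularity(&gran, &prop,
                                        CU_MEM_ALLOC_GRANULARITY_MINIMUM));
    size_t size = gran;  // one granule (typically 2 MiB)

    // One physical allocation...
    CUmemGenericAllocationHandle handle;
    CHECK(cuMemCreate(&handle, size, &prop, 0));

    // ...mapped into a contiguous reserved virtual range.
    CUdeviceptr ptr;
    CHECK(cuMemAddressReserve(&ptr, size, 0, 0, 0));
    CHECK(cuMemMap(ptr, size, 0, handle, 0));

    CUmemAccessDesc access = {};
    access.location = prop.location;
    access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    CHECK(cuMemSetAccess(ptr, size, &access, 1));

    std::printf("virtual address: %p (physical address not exposed)\n",
                (void *)ptr);

    CHECK(cuMemUnmap(ptr, size));
    CHECK(cuMemAddressFree(ptr, size));
    CHECK(cuMemRelease(handle));
    CHECK(cuCtxDestroy(ctx));
    return 0;
}
```

This requires an NVIDIA GPU and driver to run, so treat it as illustrative rather than verified on your setup.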

Cross-posted on Stack Overflow

I have read this blog post before, but it does not solve my problem: allocate contiguous GPU memory and get its physical address. Do you have any other solutions? Thanks.

That’s my question too; can someone answer it directly?

This could help: nvidia_p2p_get_pages() in GPUDirect RDMA :: CUDA Toolkit Documentation (the section on pinning GPU memory). It is normally used from kernel drivers.
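For reference, a hedged sketch of what that call looks like inside a Linux kernel module, based on the GPUDirect RDMA documentation. The function names come from nv-p2p.h (shipped with the NVIDIA driver); pin_gpu_range and pin_ctx are illustrative names I made up, and va must be a device virtual address allocated in user space (e.g. with cudaMalloc) and passed into the driver:

```c
/* Sketch only: requires the NVIDIA driver's nv-p2p.h and a kernel build
 * environment; error handling and per-mapping bookkeeping are trimmed. */
#include <linux/kernel.h>
#include <nv-p2p.h>

struct pin_ctx {
    struct nvidia_p2p_page_table *pt;
};

/* Invoked by the driver if the GPU mapping is torn down underneath us. */
static void free_callback(void *data)
{
    struct pin_ctx *ctx = data;
    nvidia_p2p_free_page_table(ctx->pt);
    ctx->pt = NULL;
}

int pin_gpu_range(uint64_t va, uint64_t len)
{
    static struct pin_ctx ctx; /* illustrative; real code allocates one per mapping */
    int i, ret;

    /* The p2p_token/va_space arguments are legacy; pass 0 on current drivers.
     * va and len should be 64 KiB aligned. */
    ret = nvidia_p2p_get_pages(0, 0, va, len, &ctx.pt, free_callback, &ctx);
    if (ret)
        return ret;

    /* Each entry is a GPU page (typically 64 KiB). Physical contiguity
     * across pages is not guaranteed, so compare adjacent addresses. */
    for (i = 0; i < ctx.pt->entries; i++)
        pr_info("GPU page %d at phys 0x%llx\n",
                i, ctx.pt->pages[i]->physical_address);

    nvidia_p2p_put_pages(0, 0, va, ctx.pt);
    return 0;
}
```

Note this gives you the physical addresses of the pinned GPU pages, but it does not make the allocation physically contiguous; you would still need to check whether consecutive pages happen to be adjacent.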

Did you find a solution to your problem?
I am also currently looking for a way to allocate physically contiguous memory on an RTX 30-series GPU, but have not found a good solution. Obtaining the physical addresses of the memory would be great as well.