How to enable DMA_BUF support for Tesla V100/RTX 3090?

Hi,

I am currently working with the NVSHMEM code base. When I tried to run the demo program nvshmem-3.2.5/bin/perftest/device/pt-to-pt/shmem_put_latency, I found that DMA_BUF is reported as unsupported on both the Tesla V100 and the RTX 3090.

The key code in NVSHMEM that detects DMA_BUF support should be here:

Support for DMA_BUF is queried through the CUDA driver API, and the query returns false.

I also wrote a small standalone program to query DMA_BUF support:

#include <stdio.h>
#include <cuda.h>

int main(void) {
    CUdevice dev;
    int flag = 0;
    CUresult s;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    /* CU_DEVICE_ATTRIBUTE_DMA_BUF_SUPPORTED (== 124, available since CUDA 11.7) */
    s = cuDeviceGetAttribute(&flag, CU_DEVICE_ATTRIBUTE_DMA_BUF_SUPPORTED, dev);
    if (s != CUDA_SUCCESS) {
        const char *name, *str;
        cuGetErrorName(s, &name);
        cuGetErrorString(s, &str);
        printf("query failed: %s | %s\n", name, str);
        return 1;
    }
    printf("DMA_BUF_SUPPORTED=%d\n", flag);
    return 0;
}

The query itself succeeds, but the program prints DMA_BUF_SUPPORTED=0.

I also learned here that DMA_BUF is only supported by the open-source NVIDIA kernel modules, so when I reinstalled the NVIDIA driver I selected the “MIT/GPL” kernel module, but DMA_BUF is still reported as unsupported.
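To rule out a mismatch between what the installer selected and what is actually running, I checked the license string of the installed nvidia kernel module (my understanding is that the open module reports “Dual MIT/GPL” while the proprietary one reports “NVIDIA”):

```shell
# Print the license of the installed nvidia kernel module.
# "Dual MIT/GPL" => open kernel module; "NVIDIA" => proprietary module.
modinfo -F license nvidia 2>/dev/null || echo "nvidia kernel module not found"
```

It reports the MIT/GPL flavor, so the open module does appear to be in use.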

I have no idea why I cannot enable DMA_BUF support on my system. I need this capability to let a BlueField-3 DPU access GPU memory directly. Are there any indications that the Tesla V100 / RTX 3090 do not support DMA_BUF in hardware? Or does the NVIDIA driver disable DMA_BUF support for these GPUs for product-line reasons?

Here are the details of my system for reference:

  • Host system: Ubuntu 22.04 kernel version 5.15.0-135-generic.
  • Nvidia Driver Version: 570.124.06 (MIT/GPL kernel module)
  • CUDA Version: 12.8
  • GPU: Tesla V100 / RTX 3090
  • CPU: INTEL(R) XEON(R) GOLD 6530

Details of the GPUs:

  1. Tesla V100:

  2. RTX 3090:

I am looking forward to your advice. Any insight is appreciated. Thanks a lot.