How to register a dmabuf fd's buffer space with the CUDA device?

Hello Nvidia:
1. Create buffer: NvBufferCreateEx(&dma_fd_0, &input_params)
2. Map buffer: NvBufferMemMap(dma_fd_0, 0, NvBufferMem_Read_Write, &sBaseAddr[0])
3. Register buffer: cudaHostRegister(sBaseAddr[0], BUF_LEN, cudaHostRegisterPortable)
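For reference, here is a simplified, self-contained sketch of this flow. The NvBufferCreateParams values, the NvBufferGetParams size query (instead of a hard-coded BUF_LEN), and the final cudaHostGetDevicePointer() call are only illustrative assumptions, not my exact code:

```cpp
// Sketch only: Jetson Multimedia API + CUDA runtime (compile with nvcc,
// link against nvbuf_utils). Field values below are placeholders.
#include <cstdio>
#include <cuda_runtime.h>
#include "nvbuf_utils.h"

int main()
{
    int dma_fd_0 = -1;

    NvBufferCreateParams input_params = {};
    input_params.width       = 1920;                      // placeholder size
    input_params.height      = 1080;
    input_params.layout      = NvBufferLayout_Pitch;
    input_params.colorFormat = NvBufferColorFormat_ABGR32;
    input_params.payloadType = NvBufferPayload_SurfArray;
    input_params.nvbuf_tag   = NvBufferTag_NONE;

    // 1. Create the dmabuf-backed buffer.
    if (NvBufferCreateEx(&dma_fd_0, &input_params) != 0) {
        printf("NvBufferCreateEx failed\n");
        return -1;
    }

    // 2. Map plane 0 into the CPU address space.
    void *sBaseAddr = nullptr;
    if (NvBufferMemMap(dma_fd_0, 0, NvBufferMem_Read_Write, &sBaseAddr) != 0) {
        printf("NvBufferMemMap failed\n");
        return -1;
    }

    // Query the plane size instead of hard-coding BUF_LEN.
    NvBufferParams params = {};
    NvBufferGetParams(dma_fd_0, &params);
    size_t buf_len = params.psize[0];

    // 3. Page-lock (pin) the mapped range; with cudaHostRegisterDefault on a
    //    UVA system such as Xavier the range is registered portable + mapped.
    cudaError_t err = cudaHostRegister(sBaseAddr, buf_len, cudaHostRegisterDefault);
    if (err != cudaSuccess) {
        printf("cudaHostRegister failed: %s\n", cudaGetErrorString(err));
        return -1;
    }

    // 4. Obtain a device-usable pointer for kernels / cudaMemcpy.
    void *dBaseAddr = nullptr;
    err = cudaHostGetDevicePointer(&dBaseAddr, sBaseAddr, 0);
    printf("cudaHostGetDevicePointer: %s\n", cudaGetErrorString(err));

    // ... launch kernels using dBaseAddr ...

    cudaHostUnregister(sBaseAddr);
    NvBufferMemUnMap(dma_fd_0, 0, &sBaseAddr);
    NvBufferDestroy(dma_fd_0);
    return 0;
}
```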

My questions are:
1. Does cudaHostRegisterPortable treat the memory as pinned memory visible to all CUDA contexts, even though it was not allocated by cudaMalloc?
2. What is the difference between cudaHostRegisterPortable and cudaHostRegisterMapped? In practice, cudaHostRegister() fails when I call it with cudaHostRegisterMapped.
3. How does the device know that the allocated space is not a contiguous memory area? (follow_pages)
4. To avoid memcpy from CUDA to CPU and from CPU to CUDA, does NVIDIA have any examples of a buffer shared between CUDA and the CPU?

thanks
bin

Hi,

1. On Xavier, you can make a pinned buffer GPU-accessible with cudaHostRegister().
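As a quick sanity check (a minimal sketch, assuming device 0), you can query whether the device supports cudaHostRegister() and whether a registered host pointer can be used directly:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int dev = 0;
    int host_register = 0, use_host_ptr = 0, map_host = 0;

    // Does the device support cudaHostRegister() at all?
    cudaDeviceGetAttribute(&host_register, cudaDevAttrHostRegisterSupported, dev);
    // Can a registered host pointer be passed to kernels as-is?
    cudaDeviceGetAttribute(&use_host_ptr, cudaDevAttrCanUseHostPointerForRegisteredMem, dev);
    // Can page-locked host memory be mapped into the device address space?
    cudaDeviceGetAttribute(&map_host, cudaDevAttrCanMapHostMemory, dev);

    printf("HostRegisterSupported             : %d\n", host_register);
    printf("CanUseHostPointerForRegisteredMem : %d\n", use_host_ptr);
    printf("CanMapHostMemory                  : %d\n", map_host);
    return 0;
}
```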

2. Please find a document below:
https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__MEMORY.html#group__CUDART__MEMORY_1ge8d5c17670f16ac4fc8fcb4181cb490c

3. The buffer needs to be page-locked memory. cudaHostRegister() page-locks the pages backing the mapped range, and the GPU accesses it through the driver's mapping, so the area does not need to be physically contiguous.
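For example, after cudaHostRegister() you can confirm the range is page-locked and visible to the GPU by querying its pointer attributes (a rough sketch, assuming CUDA 10 or newer where cudaPointerAttributes::type exists):

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main()
{
    // Any malloc'd (or mmap'd) host range can be registered; it only has to
    // be page-locked by cudaHostRegister(), not physically contiguous.
    const size_t len = 4 * 1024 * 1024;
    void *host = malloc(len);

    cudaError_t err = cudaHostRegister(host, len, cudaHostRegisterDefault);
    printf("cudaHostRegister: %s\n", cudaGetErrorString(err));

    // After registration the runtime reports the range as host memory
    // with an associated device pointer.
    cudaPointerAttributes attr = {};
    cudaPointerGetAttributes(&attr, host);
    printf("type=%d hostPointer=%p devicePointer=%p\n",
           (int)attr.type, attr.hostPointer, attr.devicePointer);

    cudaHostUnregister(host);
    free(host);
    return 0;
}
```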

4. You can try pinned host memory or unified memory.
Below is a document for your reference:
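To illustrate the unified-memory option, below is a minimal zero-copy sketch (not an official sample): one cudaMallocManaged() allocation is written by the CPU, updated by a GPU kernel, and read back by the CPU with no cudaMemcpy.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Doubles each element in place; runs on the GPU.
__global__ void doubleElements(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2;
}

int main()
{
    const int n = 1 << 20;
    int *data = nullptr;

    // One allocation shared by CPU and GPU; no cudaMemcpy required.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i)                          // CPU writes
        data[i] = i;

    doubleElements<<<(n + 255) / 256, 256>>>(data, n);   // GPU updates
    cudaDeviceSynchronize();                             // wait before CPU reads

    printf("data[42] = %d (expected %d)\n", data[42], 42 * 2);

    cudaFree(data);
    return 0;
}
```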

Thanks.
