The simplified context is: I have two servers, A and B. A has Tesla V100 GPUs, and an FPGA device is attached to B (exposed via /dev/mem, say).
I know that on a single machine we can call mmap() to map the FPGA's address space and then call cudaHostRegister() so that a GPU in that machine can access the FPGA memory.
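For concreteness, here is a rough sketch of that single-node approach. The BAR address and size (`FPGA_BAR_ADDR`, `FPGA_BAR_SIZE`) are hypothetical placeholders for the real values of the FPGA's PCIe BAR, and `cudaHostRegisterIoMemory` is used since the mapping is MMIO rather than ordinary RAM:

```cuda
// Sketch only: assumes root access to /dev/mem and a local GPU.
// FPGA_BAR_ADDR / FPGA_BAR_SIZE are made-up values for illustration.
#include <cuda_runtime.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const off_t  FPGA_BAR_ADDR = 0xF0000000;  // hypothetical physical BAR address
    const size_t FPGA_BAR_SIZE = 1 << 20;     // hypothetical 1 MiB window

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    void *bar = mmap(nullptr, FPGA_BAR_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, FPGA_BAR_ADDR);
    if (bar == MAP_FAILED) { perror("mmap"); return 1; }

    // Register the MMIO mapping with CUDA; cudaHostRegisterIoMemory
    // tells the driver this is I/O memory, not pageable system RAM.
    cudaError_t err = cudaHostRegister(bar, FPGA_BAR_SIZE,
                                       cudaHostRegisterIoMemory);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaHostRegister: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // Obtain a device pointer that kernels can dereference with
    // ordinary load/store instructions.
    void *devPtr = nullptr;
    cudaHostGetDevicePointer(&devPtr, bar, 0);

    // ... launch kernels that read/write devPtr here ...

    cudaHostUnregister(bar);
    munmap(bar, FPGA_BAR_SIZE);
    close(fd);
    return 0;
}
```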
Unfortunately, B has no GPUs attached, though both A and B have Mellanox ConnectX-5 IB cards. I wonder whether it is possible for CUDA kernels on A to access this memory region on B as if it were local memory.
It seems that nv_peer_mem can achieve a similar goal, but I want the CUDA kernel to access the memory region directly with load/store instructions.
Thanks for any help!