GPUDirect RDMA - Module can not be insert into kernel


We are checking this issue with our internal team.
Will share more information with you later.



For jetson-rdma-picoevb, how are you compiling the kernel module? I mean, for the iGPU or the dGPU?


We can find the nvidia_p2p_get_pages symbol in the kernel_src.tbz2 of r35.1.
Could you please check it again?

$ grep -ir nvidia_p2p_get_pages
kernel/nvidia/drivers/nv-p2p/nvidia-p2p.c:int nvidia_p2p_get_pages(u64 vaddr, u64 size,
kernel/nvidia/include/linux/nv-p2p.h:int nvidia_p2p_get_pages(u64 vaddr, u64 size,
kernel/nvidia/include/linux/nv-p2p.h: *   Map the pages retrieved using nvidia_p2p_get_pages and
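
Beyond grepping the sources, you can check whether those symbols are already present in the running kernel before attempting to load a module that exports them. This is a sketch not taken from the thread: `check_symbol` is a made-up helper, and the exact "exports duplicate symbol" wording of the kernel's error is an assumption.

```shell
# Sketch: query /proc/kallsyms (the running kernel's symbol table) for a
# symbol before insmod-ing a module that exports it. If another loaded
# module already exports the same symbol, insmod fails (the kernel log
# typically reports a duplicate-symbol error).
check_symbol() {
  sym="$1"
  if grep -qw "$sym" /proc/kallsyms 2>/dev/null; then
    echo "$sym: already present in the running kernel"
  else
    echo "$sym: not currently present"
  fi
}
check_symbol nvidia_p2p_get_pages
```

If `nvidia_p2p_get_pages` is already listed (e.g. because nvidia.ko is loaded), loading nv-p2p.ko will fail, which matches the conflict discussed below.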



We just got some feedback from our internal team that nv-p2p.ko and nvidia.ko cannot be used together.
Do you want to use them at the same time?



I want to do p2p and have display output simultaneously. I would assume I need both for that. Am I correct?

nvidia.ko, nvidia-modeset.ko, and nvgpu.ko are responsible for making the display work.

Is nvidia.ko required to run programs using CUDA?

The jetson-rdma-picoevb module is built with the script for the Jetson's iGPU, on the Jetson itself.


Hi both,

The same symbols are defined in both nvidia.ko and nv-p2p.ko.
So they cannot be added to the kernel at the same time.

nvidia.ko is only loaded for the dGPU use case.
That’s why we did not expect it to be loaded when designing nv-p2p.ko.

We are double-checking if nvidia.ko is required for Orin’s functionality.
Could you also test whether it works by loading only nv-p2p.ko into the kernel?



nvidia.ko is used for display output on Orin.
Removing it may also affect functionality that requires the graphics driver (e.g. Argus).


I have already verified that loading only nv-p2p.ko works. See (GPUDirect RDMA - Module can not be insert into kernel - #10 by DigPat)

I’m not sure I understand… but let’s start with my goal.
I have a card with an onboard FPGA connected to the PCIe slot on the Orin. I want to do peer-to-peer data transactions to the iGPU's memory using CUDA and the functions defined in nv-p2p.h.

  1. Is this possible?
  2. What module should I load?
  3. Can I also have display output at the same time?

Thanks for the suggestion. At least I can load my PCIe device driver kernel module now.

Looking forward to hearing from NVIDIA about a fix.


Sorry for missing this.

Currently, nvidia.ko and nv-p2p.ko cannot be loaded together due to the duplicate-symbol issue.
Without nvidia.ko, the graphics driver cannot be loaded, so the display and some features that require the graphics driver won’t work.

But a pure CUDA application should not be affected.

We are discussing the possible fix with our internal team.
We will share more information once we get feedback.

The answers to your questions:

  1. Yes
  2. nv-p2p.ko only
  3. Unfortunately no.



Thanks for clarifying. I hope you can find a solution soon.

Has there been any progress with this issue?
I have run into the same scenario as the previous users.
While being able to use the PCIe RDMA functionality is great, not being able to simultaneously display anything is problematic.

Any updates would be appreciated!



We have passed this request to our internal team.

Since it requires modification in both nvidia.ko and nv-p2p.ko, it won’t be a quick fix.
We will share more information with you as it becomes available.

Currently, the workaround is to disable the nvidia.ko module.
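
One way to apply that workaround, assuming nvidia.ko is auto-loaded via modprobe (if it is loaded another way, this may not be sufficient), is a standard modprobe blacklist entry. The sketch below writes to a scratch directory for demonstration; on the device the file would go under `/etc/modprobe.d/` (with sudo), and the file name is made up.

```shell
# Demo: write a modprobe blacklist entry to a scratch dir. On the Jetson
# the real target would be /etc/modprobe.d/blacklist-nvidia.conf
# (file name is an assumption, not from the thread).
conf_dir=$(mktemp -d)
printf 'blacklist nvidia\n' > "$conf_dir/blacklist-nvidia.conf"
cat "$conf_dir/blacklist-nvidia.conf"
```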


So I possibly have a “quick” workaround that may work. I will need to test further.

On the Jetson Orin, I downloaded public_sources.tbz2. I then modified the source for nv-p2p.c & nv-p2p.h located here:


I changed all the exported function names.
For example, from this:


To this:


I then rebuilt nvidia.ko using make in


After a successful build, I replaced the original nvidia.ko located at


With my newly built nvidia.ko module.
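
The renaming step could be sketched with `sed` as below. This is a hypothetical reconstruction, not the poster's exact edit: the `_disabled` suffix and the sample prototypes are made up for the demo, and the real edit would target the nv-p2p sources inside public_sources.tbz2 before rebuilding nvidia.ko.

```shell
# Self-contained demo on a scratch copy of a header; the real edit would
# apply to the nv-p2p.c / nv-p2p.h shipped in public_sources.tbz2.
tmp=$(mktemp -d)
cat > "$tmp/nv-p2p.h" <<'EOF'
int nvidia_p2p_get_pages(u64 vaddr, u64 size,
int nvidia_p2p_put_pages(struct nvidia_p2p_page_table *page_table);
EOF
# Append a suffix to every nvidia_p2p_* identifier so the symbols in the
# rebuilt nvidia.ko no longer clash with those exported by nv-p2p.ko.
sed -i 's/nvidia_p2p_\([a-z_]*\)/nvidia_p2p_\1_disabled/g' "$tmp/nv-p2p.h"
cat "$tmp/nv-p2p.h"
```

Note that this blanket rename also touches type names like `nvidia_p2p_page_table`; that is harmless here because both the declarations and their uses inside the nvidia.ko copy are renamed consistently.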

Afterwards, I loaded the nv-p2p module:

$ sudo insmod /lib/modules/5.10.104-tegra/kernel/drivers/nv-p2p/nvidia-p2p.ko

and finally I was able to load the picoevb-rdma module:

$ sudo insmod picoevb-rdma.ko

Hopefully this helps.




Thanks for sharing this.

We are double-checking if this can work as expected with our internal team.
We will let you know the feedback, and thanks again for sharing.



The workaround should work.
Thanks for sharing this.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.