I’m using the Jetson AGX as a root port and a Xilinx dev board as a PCIe endpoint.
The PCIe link is established and data is exchanged correctly.
For my application, about 20 kB of data must be read from the FPGA DDR to the Jetson every millisecond.
I configured a user IRQ, sent by the endpoint over PCIe, to inform the Jetson that data is available.
I’m using MSI-X interrupts.
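As a sanity check, MSI-X usage can be confirmed from the endpoint’s config space; the bus/device/function below is only an example and depends on the platform:

sudo lspci -vv -s 0005:01:00.0 | grep -i 'msi-x'   # should report "MSI-X: Enable+"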
By default, IRQ affinity is set to CPU core 3 of the Jetson AGX.
Here is a view of the IRQs associated with my PCIe driver:
cat /proc/interrupts | grep xdma
820: 0 0 0 0 0 0 0 0 PCI-MSI 0 Edge xdma
821: 0 0 0 0 0 0 0 0 PCI-MSI 1 Edge xdma
822: 144097 0 0 0 0 0 0 0 PCI-MSI 2 Edge xdma
IRQ #822 is the one raised when a user interrupt is sent over PCIe from the FPGA.
IRQ #821 is triggered when reading data from the FPGA DDR.
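For reference, the affinity of a given IRQ can be checked and changed through procfs; the mask below assumes core 3, i.e. 0x08:

cat /proc/irq/822/smp_affinity                  # current CPU mask, in hex
echo 8 | sudo tee /proc/irq/822/smp_affinity    # 0x08 = CPU core 3 only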
I’m using an RT-PREEMPT patched kernel and I assigned the IRQ threads to CPU core #3 (using the taskset command).
I also set the priority of these threads to 80 (versus 50 by default for IRQ threads).
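Concretely, the threaded handlers can be located and configured as follows (<pid> is a placeholder for the PID reported by ps):

ps -eLo pid,rtprio,comm | grep 'irq/82'   # list the xdma IRQ threads
sudo taskset -cp 3 <pid>                  # pin a thread to CPU core 3
sudo chrt -f -p 80 <pid>                  # SCHED_FIFO priority 80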
Finally, I isolated CPU core #3 by adding isolcpus=3 (no spaces) to the APPEND line in /boot/extlinux/extlinux.conf.
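The resulting APPEND line looks like the following (the other arguments are whatever the stock L4T file already provides; only isolcpus=3 is added):

APPEND ${cbootargs} quiet isolcpus=3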
Thus, I expect CPU core #3 to be dedicated exclusively to handling these PCIe interrupts (user and read).
However, I’m experiencing some jitter, and the most problematic issue is very high worst-case latencies: while the average is about 100 µs, the maximum observed values reach several milliseconds.
-> See the latency histogram below, which shows the transfer time of my data from FPGA to Jetson (about 20 kB every millisecond):
Such latencies are unacceptable for my application, and I absolutely must bound them.
Do you have any suggestions that could help bound the transfer time?