Jetson AGX Xavier: CPU affinity of IRQs over PCIe can't be changed

I’m using the Jetson AGX Xavier as a PCIe root port and a Xilinx dev board as the PCIe endpoint.
The PCIe link is established correctly and data is exchanged as expected.

I configured a user IRQ sent by the endpoint over PCIe, using MSI-X interrupts.
By default, the IRQ affinity is set to the CPU0 core of the Jetson AGX. Here is a view of the IRQs associated with my PCIe driver:

cat /proc/interrupts | grep xdma
820: 0 0 0 0 0 0 0 0 PCI-MSI 0 Edge xdma
821: 0 0 0 0 0 0 0 0 PCI-MSI 1 Edge xdma
822: 144097 0 0 0 0 0 0 0 PCI-MSI 2 Edge xdma
823: 0 0 0 0 0 0 0 0 PCI-MSI 3 Edge xdma
824: 0 0 0 0 0 0 0 0 PCI-MSI 4 Edge xdma
825: 0 0 0 0 0 0 0 0 PCI-MSI 5 Edge xdma
826: 0 0 0 0 0 0 0 0 PCI-MSI 6 Edge xdma
827: 0 0 0 0 0 0 0 0 PCI-MSI 7 Edge xdma
828: 0 0 0 0 0 0 0 0 PCI-MSI 8 Edge xdma
829: 0 0 0 0 0 0 0 0 PCI-MSI 9 Edge xdma
830: 0 0 0 0 0 0 0 0 PCI-MSI 10 Edge xdma
831: 0 0 0 0 0 0 0 0 PCI-MSI 11 Edge xdma
832: 0 0 0 0 0 0 0 0 PCI-MSI 12 Edge xdma
833: 0 0 0 0 0 0 0 0 PCI-MSI 13 Edge xdma
834: 0 0 0 0 0 0 0 0 PCI-MSI 14 Edge xdma
835: 0 0 0 0 0 0 0 0 PCI-MSI 15 Edge xdma
836: 0 0 0 0 0 0 0 0 PCI-MSI 16 Edge xdma
837: 0 0 0 0 0 0 0 0 PCI-MSI 17 Edge xdma

The user IRQ is #822, and I’d like to change its affinity to another CPU core, but the following command returns an error:

root@nvidia:/home/nvidia# echo 4 > /proc/irq/822/smp_affinity
bash: echo: write error: Input/output error
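
One detail worth noting: smp_affinity expects a hexadecimal CPU bitmask (so 4 means CPU2, and CPU4 would be 10), while smp_affinity_list takes plain CPU numbers. Presumably both interfaces are rejected the same way here, since the write itself fails; a sketch:

echo 10 > /proc/irq/822/smp_affinity      # hex bitmask 0x10 = CPU4
echo 4 > /proc/irq/822/smp_affinity_list  # CPU list form: CPU4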

I also tried to change the affinity in kernel space from the driver by adding the following call:
irq_set_affinity_hint(vector, cpumask_of(4));
But that doesn’t work either…

I’ve also checked that the kernel is built with CONFIG_REGMAP_IRQ=y, and it is.
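
For the record, a quick way to check this (a sketch; /proc/config.gz is only present if the kernel was built with CONFIG_IKCONFIG_PROC, otherwise look under /boot):

zcat /proc/config.gz | grep CONFIG_REGMAP_IRQ
grep CONFIG_REGMAP_IRQ /boot/config-$(uname -r)   # alternative if /proc/config.gz is absent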

How can I change the affinity of these IRQs?

I’m afraid the affinity of the PCIe IRQs can’t be changed on the AGX. They are always handled on CPU0.

For my application, I’m trying to reduce the latency following the user MSI-X IRQs, which are received every 1 ms.

The observed latency is acceptable in most cases (say, less than 100 µs), but in rare situations it reaches several hundred µs, and sometimes more than 1 ms, which is not acceptable at all…

If the affinity of these IRQs can’t be changed, do you have any advice that could help control this latency? (For information, the Ubuntu kernel I use is already PREEMPT_RT patched.)

The affinity of a single MSI interrupt can’t be changed, but all of them can be moved at once by changing the CPU affinity of the tegra-pcie-msi interrupt. There are 2-4 entries of it on AGX Xavier, so you may have to find which one affects your device.
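
A rough sketch of what I mean (the parent IRQ number 54 below is only an example; look up the real number(s) in /proc/interrupts):

grep tegra-pcie-msi /proc/interrupts   # find the parent MSI IRQ number(s), e.g. 54
echo 10 > /proc/irq/54/smp_affinity    # hex bitmask 0x10 = CPU4
cat /proc/interrupts | grep xdma       # the xdma counts should now grow on CPU4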

It would have been great if each MSI interrupt’s affinity could be changed, but that’s not possible on Jetson boards.

Thanks for your answer.
The solution you gave works fine for me: I can now change the affinity of my MSI-X user IRQ.

Unfortunately, this does not reduce the latency I observe, so the problem must be somewhere else…

What latency requirement are you looking for? Since the AGX Xavier has Carmel cores, the RT performance will differ from the A57s in, say, a Jetson TX2.

A couple of things you can do:

  1. I am sure you are already running a PREEMPT_RT kernel, but if not, please build and boot one.
  2. In the kernel cmdline, you can add isolcpus=4 to the APPEND line in extlinux.conf (see the sketch just below this list). Then, changing the IRQ affinity to CPU 4 should help reduce some of the jitter.
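
For example, in /boot/extlinux/extlinux.conf the modified line could look like this (a sketch; keep whatever arguments your APPEND line already has and just add isolcpus=4, then reboot):

APPEND ${cbootargs} quiet isolcpus=4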

You can also look at our product RedHawk, an RTOS that has been released for all Jetson products. We can be reached at: Contact Us | Concurrent Real-Time

To begin with, I’d like to obtain bounded latency values.

Indeed, the MSI-X user IRQ itself shows bounded latency values, lower than 100 µs.
After receiving the user IRQ, I start a transfer of a few KB from the FPGA to the Jetson.

It is during this second step that the observed latencies may reach several milliseconds!
The average value seems high to me but could be acceptable (approximately 100 µs).
However, plotting a histogram of the observed latency values shows that in some very rare cases the latency takes huge values!
I’d like to find a setup where these latencies are properly controlled and bounded.
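
As a point of comparison, a scheduler-only baseline from cyclictest should help separate pure scheduling jitter from the DMA transfer itself (a sketch, assuming the rt-tests package is installed; the 1 ms interval matches my IRQ period):

sudo cyclictest -m -p 99 -i 1000 -h 2000 -q   # lock memory, RT priority 99, 1 ms interval, µs-resolution histogram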

Concerning your advice:

  1. Yes, I have already applied the PREEMPT_RT patch to the kernel.
  2. I will give what you describe a try.

Thanks for your answers !

Thanks, I tried isolating one CPU core and dedicating it to handling my IRQ’s ISR, but I’m still experiencing some high transfer latencies.

I’ve opened another discussion in the forum to talk about this (PCIe IRQ latency unbounded value).