Too many IRQs on only one queue with VMA

Hi,

We are using VMA in multiple processes on a box with profile=latency, and we can see that each time we write to the socket an IRQ is triggered, and they all land on the same IRQ, 241 (mlx5_comp0@pci:0000:41:00.0).

Without VMA, we can see that the IRQs are load balanced across the different NIC queues, and therefore across different IRQs.
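To make the observation concrete, here is a minimal sketch of how we check the per-queue distribution from /proc/interrupts while traffic is running (the "mlx5_comp" match follows the IRQ name above; the 5-second sampling window is just for illustration):

```python
#!/usr/bin/env python3
"""Sample /proc/interrupts twice and report how the mlx5 completion-queue
interrupts are spread across the NIC queues."""
import time

def mlx5_irq_counts():
    counts = {}
    with open("/proc/interrupts") as f:
        cpus = len(f.readline().split())       # header row: one column per CPU
        for line in f:
            if "mlx5_comp" not in line:
                continue
            fields = line.split()
            irq = fields[0].rstrip(":")
            total = sum(int(x) for x in fields[1:1 + cpus])
            name = fields[-1]                   # e.g. mlx5_comp0@pci:0000:41:00.0
            counts[(irq, name)] = total
    return counts

before = mlx5_irq_counts()
time.sleep(5)                                   # generate the socket traffic during this window
after = mlx5_irq_counts()

for key in sorted(after):
    delta = after[key] - before.get(key, 0)
    print(f"IRQ {key[0]:>5} {key[1]:<40} +{delta}")
```

With VMA loaded, only the mlx5_comp0 line shows a non-zero delta; without VMA, the increments are spread over the other completion queues.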

We tried specifying different values for VMA_RING_ALLOCATION_LOGIC_TX but did not see any change.
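For reference, this is roughly how we set the variable when launching the processes (a sketch only: the application command line and the candidate values below are placeholders, and the valid values for VMA_RING_ALLOCATION_LOGIC_TX should be taken from the libvma documentation):

```python
#!/usr/bin/env python3
"""Launch the application under VMA with different ring-allocation settings."""
import os
import subprocess

APP_CMD = ["./our_app", "--some-args"]      # placeholder application command
CANDIDATE_VALUES = ["0", "20", "30"]        # assumed values; verify against the VMA README

for value in CANDIDATE_VALUES:
    env = dict(os.environ,
               LD_PRELOAD="libvma.so",      # run the process through VMA
               VMA_RING_ALLOCATION_LOGIC_TX=value)
    print(f"--- VMA_RING_ALLOCATION_LOGIC_TX={value} ---")
    subprocess.run(APP_CMD, env=env, check=False)
```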

How can we manage to have fewer IRQs and load balance them across the different NIC queues/IRQs, please?

Regards,

Laurent

Hello Danesi Laurent,

Although VMA is an open-source product, its support requires a valid contract. If your organization has such a contract, feel free to open an official support ticket by writing to Networking-support@nvidia.com and providing all details of the issue: application details, how to reproduce it, log files, etc. Try to reproduce it using sockperf or iperf, which can send/receive unicast and multicast traffic. In addition, please indicate whether this is the first time the issue has occurred and whether there is any degradation in performance.

Without a support contract, I would suggest opening a ticket on the VMA page: https://github.com/Mellanox/libvma.

If you are interested in purchasing a support contract, please contact us by e-mail: Networking-contracts@nvidia.com