How to enable VF multi-queue for SR-IOV on KVM?

I have successfully enabled SR-IOV on KVM for a ConnectX-3 (InfiniBand). With iperf I measure up to 28.6 Gb/s between guest hosts, but only up to 14 Gb/s between virtual machines. I found that although the virtual machine shows multiple queues in /proc/interrupts, only one queue is actually used. I have configured smp_affinity and disabled the irqbalance service. How can I enable VF multi-queue for SR-IOV on KVM?

Thanks!
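For reference, one quick way to check whether traffic actually spreads across the VF's queues is to diff the per-queue interrupt counters around an iperf run (a sketch; `mlx4` is the match string from the output below, and the iperf server address is a placeholder you must fill in):

```shell
# Snapshot the mlx4 per-queue interrupt counters, generate traffic, then
# snapshot again: queue IRQs whose counts do not change carried no traffic.
grep mlx4 /proc/interrupts > /tmp/irq.before
iperf -c <server-address> -P 8 -t 30   # multi-stream traffic; a single stream maps to one queue
grep mlx4 /proc/interrupts > /tmp/irq.after
diff /tmp/irq.before /tmp/irq.after
```

If only one IRQ line shows up in the diff, only one queue is receiving traffic, regardless of how many queues are allocated.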

VM host:

[root@host-09 ~]# cat /proc/interrupts | grep mlx4
45:     106      52      58      59      59      59      54      55  PCI-MSI-edge  mlx4-async@pci:0000:00:07.0
46: 2435659 2939965   41253   26523   49013   59796   56406   70341  PCI-MSI-edge  mlx4-1@0000:00:07.0
47:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-2@0000:00:07.0
48:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-3@0000:00:07.0
49:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-4@0000:00:07.0
50:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-5@0000:00:07.0
51:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-6@0000:00:07.0
52:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-7@0000:00:07.0
53:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-8@0000:00:07.0
54:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-9@0000:00:07.0
55:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-10@0000:00:07.0
56:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-11@0000:00:07.0
57:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-12@0000:00:07.0
58:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-13@0000:00:07.0
59:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-14@0000:00:07.0
60:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-15@0000:00:07.0
61:       0       0       0       0       0       0       0       0  PCI-MSI-edge  mlx4-16@0000:00:07.0

[root@host-09 ~]# cat /proc/irq/46/smp_affinity
02
[root@host-09 ~]# cat /proc/irq/47/smp_affinity
04
[root@host-09 ~]# cat /proc/irq/48/smp_affinity
08
[root@host-09 ~]# cat /proc/irq/49/smp_affinity
10
[root@host-09 ~]# cat /proc/irq/50/smp_affinity
20
[root@host-09 ~]# cat /proc/irq/51/smp_affinity
40
[root@host-09 ~]# cat /proc/irq/52/smp_affinity
80
[root@host-09 ~]# cat /proc/irq/53/smp_affinity
01
[root@host-09 ~]# cat /proc/irq/54/smp_affinity
02
[root@host-09 ~]# cat /proc/irq/55/smp_affinity
04
[root@host-09 ~]# cat /proc/irq/56/smp_affinity
08
[root@host-09 ~]# cat /proc/irq/57/smp_affinity
10
[root@host-09 ~]# cat /proc/irq/58/smp_affinity
20
[root@host-09 ~]# cat /proc/irq/59/smp_affinity
40
[root@host-09 ~]# cat /proc/irq/60/smp_affinity
80
[root@host-09 ~]# cat /proc/irq/61/smp_affinity
01
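The one-hot masks above walk each IRQ across the eight vCPUs in turn (IRQ 46 → CPU 1, ..., 53 → CPU 0, and so on). A minimal sketch that reproduces that assignment — the IRQ range 46-61 and the 8-CPU count are taken from this VM's output; the write into /proc/irq is left commented because it needs root and a stopped irqbalance:

```shell
# Compute a one-hot CPU affinity mask for each mlx4 queue IRQ, wrapping
# around the 8 vCPUs exactly as in the listing above.
for irq in $(seq 46 61); do
    cpu=$(( (irq - 45) % 8 ))
    mask=$(printf '%02x' $(( 1 << cpu )))
    echo "IRQ $irq -> smp_affinity $mask"
    # Apply as root (with irqbalance stopped):
    # echo "$mask" > /proc/irq/$irq/smp_affinity
done
```

Note that pinning IRQs only controls where interrupts are handled; it cannot create queues the driver has not allocated.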

[root@host-09 ~]# ls -la /sys/class/net/ib0/queues/
total 0
drwxr-xr-x 4 root root 0 Jun 26 12:11 .
drwxr-xr-x 5 root root 0 Jun 26 12:11 ..
drwxr-xr-x 2 root root 0 Jun 26 12:11 rx-0
drwxr-xr-x 3 root root 0 Jun 26 12:11 tx-0
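Counting the rx-*/tx-* directories under sysfs, as above, is a driver-independent way to see how many queues a netdev really exposes. A small sketch (ib0 is the interface from this thread; substitute yours):

```shell
# Count the receive and transmit queue directories under sysfs; a
# multi-queue device shows more than one of each.
dev=ib0
rx=$(ls -d /sys/class/net/"$dev"/queues/rx-* 2>/dev/null | wc -l)
tx=$(ls -d /sys/class/net/"$dev"/queues/tx-* 2>/dev/null | wc -l)
echo "$dev: $rx rx queue(s), $tx tx queue(s)"
```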

Hi,

How did you set your IRQ affinity?

Did you try the set_irq_affinity_bynode.sh script?

Try again and let me know.

Marc

As I wrote above, I have already set the IRQ affinity manually inside the virtual machine. As you can see below, the driver exposes only a single hardware queue (rx-0/tx-0) in the VM, while the guest host shows all sixteen. My guess is that the VF driver does not support the multi-queue function under SR-IOV in the virtual machine.

VM:

[root@host-01 ~]# ls -la /sys/devices/pci0000:00/0000:00:04.0/net/ib0/queues/
total 0
drwxr-xr-x 4 root root 0 Jun 29 10:11 .
drwxr-xr-x 5 root root 0 Jun 29 10:11 ..
drwxr-xr-x 2 root root 0 Jun 29 10:11 rx-0
drwxr-xr-x 3 root root 0 Jun 29 10:11 tx-0

Guest Host:

[root@testserver-1 ~]# ls -la /sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/net/ib0/queues/
total 0
drwxr-xr-x 35 root root 0 Jun 28 19:59 .
drwxr-xr-x  5 root root 0 Jul 10 10:51 ..
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-0
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-1
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-10
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-11
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-12
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-13
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-14
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-15
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-2
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-3
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-4
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-5
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-6
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-7
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-8
drwxr-xr-x  2 root root 0 Jun 28 19:59 rx-9
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-0
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-1
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-10
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-11
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-12
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-13
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-14
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-15
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-16
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-2
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-3
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-4
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-5
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-6
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-7
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-8
drwxr-xr-x  3 root root 0 Jun 28 19:59 tx-9

Hi,

Please open a support ticket.

Best regards,

Marc

Where can I open a support ticket?