RDMA doesn't work between host and DPU

I am experiencing the following error while trying to use RDMA from the DPU to the host:

Command run on the DPU:

ubuntu@localhost:~$ ib_send_bw

************************************
* Waiting for client to connect... *
************************************
Failed to modify QP to INIT, ret=95
Failed to modify QP to INIT
 Unable to set up my IB connection parameters
 Unable to set up socket connection

Command run on the host:

me@host:~/learning/grpc-login$ ib_send_bw 192.168.0.4
---------------------------------------------------------------------------------------
                    Send BW Test
 Dual-port       : OFF          Device         : mlx5_0
 Number of qps   : 1            Transport type : IB
 Connection type : RC           Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : ON
 TX depth        : 128
 CQ Moderation   : 1
 Mtu             : 1024[B]
 Link type       : Ethernet
 GID index       : 3
 Max inline data : 0[B]
 rdma_cm QPs     : OFF
 Data ex. method : Ethernet
---------------------------------------------------------------------------------------
ethernet_read_keys: Couldn't read remote address
 Unable to read from socket/rdma_cm
Failed to exchange data between server and clients

mst status output:

  ~ sudo mst status
MST modules:
------------
    MST PCI module is not loaded
    MST PCI configuration module loaded

MST devices:
------------
/dev/mst/mt4129_pciconf0         - PCI configuration cycles access.
                                   domain:bus:dev.fn=0000:e2:00.0 addr.reg=88 data.reg=92 cr_bar.gw_offset=-1
                                   Chip revision is: 00
/dev/mst/mt41686_pciconf0        - PCI configuration cycles access.
                                   domain:bus:dev.fn=0000:21:00.0 addr.reg=88 data.reg=92 cr_bar.gw_offset=-1
                                   Chip revision is: 01

IP addresses on the DPU:

ubuntu@localhost:~$ sudo ip -c a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: oob_net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:c0:eb:c0:c1:10 brd ff:ff:ff:ff:ff:ff
    altname enamlnxbf17i0
    inet 192.168.200.100/24 brd 192.168.200.255 scope global oob_net0
       valid_lft forever preferred_lft forever
    inet6 fe80::ac0:ebff:fec0:c110/64 scope link 
       valid_lft forever preferred_lft forever
3: tmfifo_net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:1a:ca:ff:ff:03 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/30 brd 192.168.100.3 scope global tmfifo_net0
       valid_lft forever preferred_lft forever
    inet 192.168.254.2/24 brd 192.168.254.255 scope global tmfifo_net0
       valid_lft forever preferred_lft forever
    inet6 fe80::21a:caff:feff:ff03/64 scope link 
       valid_lft forever preferred_lft forever
4: p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 08:c0:eb:c0:c1:0a brd ff:ff:ff:ff:ff:ff
    altname enp3s0f0np0
    inet6 fe80::ac0:ebff:fec0:c10a/64 scope link 
       valid_lft forever preferred_lft forever
5: p1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 08:c0:eb:c0:c1:0b brd ff:ff:ff:ff:ff:ff
    altname enp3s0f1np1
6: pf0hpf: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether c6:a8:ba:c2:88:81 brd ff:ff:ff:ff:ff:ff
    altname enp3s0f0nc1pf0
    inet6 fe80::c4a8:baff:fec2:8881/64 scope link 
       valid_lft forever preferred_lft forever
7: pf1hpf: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 3a:ba:95:80:4d:72 brd ff:ff:ff:ff:ff:ff
    altname enp3s0f1nc1pf1
    inet6 fe80::38ba:95ff:fe80:4d72/64 scope link 
       valid_lft forever preferred_lft forever
8: en3f0pf0sf0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether e2:29:7e:f9:a9:58 brd ff:ff:ff:ff:ff:ff
    altname enp3s0f0npf0sf0
9: enp3s0f0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 02:d3:bb:cc:56:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.4/24 brd 192.168.0.255 scope global enp3s0f0s0
       valid_lft forever preferred_lft forever
    inet6 fe80::d3:bbff:fecc:564c/64 scope link 
       valid_lft forever preferred_lft forever
10: en3f1pf1sf0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 32:d2:69:d0:16:04 brd ff:ff:ff:ff:ff:ff
    altname enp3s0f1npf1sf0
11: enp3s0f1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 02:ca:c4:d3:e6:97 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ca:c4ff:fed3:e697/64 scope link 
       valid_lft forever preferred_lft forever
15: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8a:70:d3:8b:17:26 brd ff:ff:ff:ff:ff:ff
16: ovs-br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 08:c0:eb:c0:c1:0a brd ff:ff:ff:ff:ff:ff

Do you know what could be causing this error?
Thank you

ret=95 on the QP modify is EOPNOTSUPP (Operation not supported), which usually points at a device or configuration issue rather than connectivity. A few things to check:

Ensure that the latest BFB image has been pushed to the DPU and that the firmware has been upgraded.
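A quick way to verify from the host, assuming the MFT tools are installed (the device path matches your mst status output above; the BFB path and rshim index are placeholders):

# Query the firmware currently running on the BlueField
sudo flint -d /dev/mst/mt41686_pciconf0 query

# Re-push a BFB image to the DPU over rshim if needed
sudo bfb-install --bfb /path/to/DOCA_image.bfb --rshim rshim0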

Check that the DPU has the default ports and OVS configuration
(the bridge should show all relevant interfaces mapped to it), as shown below.
The DPU documentation on NVIDIA Networking Docs describes the expected defaults.
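A minimal check on the DPU, assuming the default bridge layout (bridge names vary by image; ovsbr1 below is illustrative):

# List the OVS bridges and the ports attached to them
sudo ovs-vsctl show

# The uplink (p0), host representor (pf0hpf) and SF representor
# (en3f0pf0sf0) should all appear under the same bridge
sudo ovs-vsctl list-ports ovsbr1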

Make sure to use the same GID index on both sides (show_gids), or use -R (rdma_cm); see the example below.
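For instance, assuming show_gids reports a usable RoCEv2 GID at index 3 on both sides (index and IP taken from the output above, adjust as needed):

# Server (DPU): pin the GID index explicitly
ib_send_bw -x 3

# Client (host): use the same GID index
ib_send_bw -x 3 192.168.0.4

# Or let rdma_cm negotiate the connection parameters on both sides
ib_send_bw -R                 # server
ib_send_bw -R 192.168.0.4    # client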

Check ibdev2netdev on both the host and the DPU to see which RDMA device backs the interface you want to use, and select that device explicitly on the CLI (i.e. with -d); see the example below.
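For example (the mlx5_X names are placeholders; use whichever device ibdev2netdev maps to your interface, e.g. enp3s0f0s0 on the DPU):

# Map RDMA devices to netdevs on each side
ibdev2netdev

# Pass the chosen device to the test explicitly
ib_send_bw -d mlx5_2                 # server (DPU)
ib_send_bw -d mlx5_0 192.168.0.4     # client (host)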

Lastly, should you have a support contract, please open a support case and we will assist you further.