Patch needed to activate RoCE v2 for ConnectX-3 10G card

Hi,

Could you please provide a patch to activate RoCE v2 for the ConnectX-3 10G card? We are blocked on this. We can't install the MLNX OFED stack; we want to use the inbox driver.

Thanks

Rama

I don't think so. Are all the modules up?

Please make sure that you have the updated user space and that all of the following modules are loaded:

ib_cm, rdma_cm, ib_umad, ib_uverbs, ib_ucm, rdma_ucm, mlx4_ib
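
For example (a sketch; adjust to your setup), you can check which of these are loaded with lsmod and load anything missing with modprobe:

lsmod | egrep 'ib_cm|rdma_cm|ib_umad|ib_uverbs|ib_ucm|rdma_ucm|mlx4_ib'
modprobe mlx4_ib
modprobe rdma_ucm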

Could you please tell me the operating system and the libibverbs and libmlx4 versions?
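
On RHEL, for example, the installed user-space versions can be checked with (assuming the distribution packages are in use):

rpm -q libibverbs libmlx4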

Yours,

Talat

We are using ConnectX-3 Pro with the MLNX OFED stack, and it is detecting RoCE v2 properly.

But we are not getting the RoCE v2 parameter with the 4.8 kernel without the MLNX OFED stack.

I want to use ConnectX-3 Pro with RoCE v2 without the Mellanox OFED stack.

Is it possible?

Thanks Talat,

I am able to use ConnectX-3 Pro in RoCE v2 mode with inbox OFED.

One correction to your document:

echo RoCE V2 > default_roce_mode

We need to use v2 (lowercase v).
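
That is, the line should read:

echo RoCE v2 > default_roce_mode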

Thanks

Rama

RHEL 7.0, kernel 4.7

Part number: cx314A-BCCT

ConnectX-3 Pro

The kernel side supports RoCE v2; just please make sure that you have the latest upstream InfiniBand user space, i.e. libibverbs and libmlx4.

You can follow the inbox driver user manual for RHEL 7.3: http://www.mellanox.com/pdf/prod_software/Red_Hat_Enterprise_Linux_(RHEL)_7.3_Driver_User_Manual_1.0.pdf

See the “Default RoCE Mode Setting” section.

FYI, by default both RoCE modes are enabled; you just need to set the default one.

Hi Talat,

I have used kernel 4.8. After that, if I run

ls /sys/module/mlx4_core/parameters, I do not find any entry related to RoCE mode.

Please tell me how we can view the GIDs and where they are located. Also, please let me know the command to check for support in the user-space libraries.

There is no roce_mode parameter in upstream; RoCE v2 is enabled by default.

GID Table in sysfs

The GID table is exposed to user space via sysfs:

  1. GID values can be read from:

/sys/class/infiniband/{device}/ports/{port}/gids/{index}

  2. GID type can be read from:

/sys/class/infiniband/{device}/ports/{port}/gid_attrs/types/{index}

  3. GID net_device can be read from:

/sys/class/infiniband/{device}/ports/{port}/gid_attrs/ndevs/{index}
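
For example, a minimal check, assuming device mlx4_0, port 1, and GID index 0:

cat /sys/class/infiniband/mlx4_0/ports/1/gids/0
cat /sys/class/infiniband/mlx4_0/ports/1/gid_attrs/types/0
cat /sys/class/infiniband/mlx4_0/ports/1/gid_attrs/ndevs/0

On a kernel with RoCE v2 enabled, some of the types entries should read “RoCE v2”.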

Yours,

Talat

Hi Talat,

The above entries are only visible with the MLNX_OFED stack; we are not getting them with kernel 4.8 and inbox OFED.

Even the steps below give an error:

cd /sys/kernel/config/rdma_cm

mkdir mlx4_0

Thanks

Rama

Hi Talat,

The above procedure worked for me with the 10G card and the inbox OFED driver, but with the 40G card I am getting an issue.

I am using an MCX314A-BBCT 40G ConnectX-3 Pro card. I am trying to change the card to RoCE v2 mode with the inbox driver, as per the document below.

But it is failing to change to RoCE v2.

RoCE v2 Considerations https://community.mellanox.com/s/article/roce-v2-considerations

[root@xhdipsspdk1 ~]# echo "RoCE v2" > /sys/kernel/config/rdma_cm/mlx4_0/ports/2/default_roce_mode

-bash: echo: write error: Invalid argument

echo "RoCE V2" > /sys/kernel/config/rdma_cm/mlx4_0/ports/1/default_roce_mode

-bash: echo: write error: Invalid argument

Is it possible to change the mode to RoCE v2 with this card?

Thanks

Rama

Hi Rama,

Could you please tell me the device type, operating system, and kernel version that you are using?

For your information, RoCE v2 is supported ONLY on ConnectX®-3 Pro and ConnectX®-4 adapter cards.

Please make sure that your device is a ConnectX-3 Pro and not a ConnectX-3.
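
One quick way to check, as a sketch (the exact strings can vary with the pci.ids database):

lspci | grep Mellanox

A ConnectX-3 Pro typically shows up as “MT27520 Family [ConnectX-3 Pro]”, while a plain ConnectX-3 shows “MT27500 Family [ConnectX-3]”.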

Thanks,

Talat

Yes, it’s possible.

RoCE v2 support was accepted into kernel v4.5 (commit 3f723f42d9d625bb9ecfe923d19d1d42da775797).

Check the GIDs that are generated; there should be RoCE v2 GIDs.

Please make sure that your user-space libraries support it.

In order to work with RDMA_CM, use configfs:

mount -t configfs none /sys/kernel/config

cd /sys/kernel/config/rdma_cm

mkdir mlx4_0

cd mlx4_0

echo RoCE V2 > default_roce_mode

cd ..

rmdir mlx4_0
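
Note that on recent upstream kernels the default_roce_mode attribute is per port (matching the full paths used earlier in this thread), so a fuller sketch, assuming device mlx4_0 and port 1, would be:

echo RoCE v2 > /sys/kernel/config/rdma_cm/mlx4_0/ports/1/default_roce_mode
cat /sys/kernel/config/rdma_cm/mlx4_0/ports/1/default_roce_mode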

Hi,

Could you please tell me the operating system? Then I can check whether the inbox driver supports RoCE v2 or not.

Thanks,

Talat