How to couple ConnectX with FPGA?

Hi all,

my apologies in case this discussion is misplaced. I’m a new member of this community, but I expect to be here more often in the future.

I’m currently working on the concept and realization of a high-data-rate acquisition system based on a cluster with InfiniBand as the interconnect. Nothing new up to this point. The big thing about our system will be the data rate of the acquisition. Here at the university, we are starting a large research project on signal processing in the THz spectrum. Our concept includes two clusters (InfiniBand, MPI, GPU) as well as input/output components for high-data-rate acquisition (FPGA based). Each compute node will be used as an input/output node for data at rates of up to 5 GBytes/s per node. The whole system will scale up with parallel instances of compute-node/FPGA entities, ending in system data rates of more than 80 GBytes/s. That is a lot of data to be transmitted and stored in a short time. The data will be processed offline.
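To put the numbers into perspective, here is a quick back-of-envelope calculation (purely illustrative; it assumes decimal units, i.e. 1 GByte/s = 8 Gbit/s):

```c
/* Back-of-envelope check of the stated rates.
 * Assumption: decimal units, i.e. 1 GByte/s = 8 Gbit/s. */
#include <stdio.h>

int main(void)
{
    const double per_node_gbyte_s = 5.0;                  /* stated per-node rate     */
    const double per_node_gbit_s  = per_node_gbyte_s * 8; /* = 40 Gbit/s on the wire  */
    const double system_gbyte_s   = 80.0;                 /* stated aggregate target  */
    const double instances        = system_gbyte_s / per_node_gbyte_s; /* = 16 pairs  */

    printf("per node : %.0f Gbit/s line rate\n", per_node_gbit_s);
    printf("instances: at least %.0f node/FPGA pairs for %.0f GByte/s\n",
           instances, system_gbyte_s);
    return 0;
}
```

So each node has to sustain roughly a 40 Gbit/s line rate, which is why FDR InfiniBand or 40/100 GbE come into play.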

I have some concerns regarding data input/output processing at the node level. How do we get the data into the cluster nodes? That is the big question. Up to now, the obvious solution is to use the PCIe bus to connect each compute node to an FPGA board. Surely this will work, but it is still an ‘old-fashioned’ approach that makes no use of RDMA, InfiniBand, etc. I’ve looked around for a suitable way to couple the FPGA data stream directly to InfiniBand and/or Converged Ethernet. The result so far: COTS devices are not available.
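For completeness, this is roughly what the PCIe path looks like from the host side. It is only a sketch under assumptions: I assume a Xilinx XDMA-style driver that exposes a card-to-host DMA channel as a character device, and the device name and chunk size below are illustrative, not taken from our actual setup.

```c
/* Minimal sketch of the PCIe path: the FPGA DMAs acquisition data
 * card-to-host and the application reads it from a character device.
 * Assumptions: a Xilinx XDMA-style driver; device path and chunk size
 * are illustrative only. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (4u << 20)                         /* 4 MiB per read, tune to the DMA engine */

int main(void)
{
    int fd = open("/dev/xdma0_c2h_0", O_RDONLY); /* card-to-host DMA channel (assumed) */
    if (fd < 0) { perror("open"); return 1; }

    void *buf = NULL;
    if (posix_memalign(&buf, 4096, CHUNK)) return 1;  /* page-aligned staging buffer */

    for (;;) {
        ssize_t n = read(fd, buf, CHUNK);        /* blocks until the DMA completes */
        if (n <= 0) break;
        /* hand 'buf' to MPI / verbs / storage here */
    }

    free(buf);
    close(fd);
    return 0;
}
```

The data would then still have to be handed over to MPI/verbs for the hop into the cluster, which is exactly the extra step we would like to avoid.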

Currently, we’re a little bit lost. On the FPGA (UltraScale+) we could make use of a Xilinx 100 GbE transceiver. But how do we get this transceiver to work with a Mellanox ConnectX? I’m not really sure that the ConnectX and the Xilinx 100 GbE transceiver can communicate with each other, even though both claim conformity to 100 GbE. In addition, Xilinx support for the upper transport layers is missing, not to mention RDMA or RoCE.
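To make the gap concrete: on the ConnectX side, applications talk standard verbs/RDMA-CM, so a direct FPGA-to-ConnectX coupling would require the FPGA to implement the same RoCE transport underneath. Below is a minimal sketch of the host side only (the peer address, port, queue depths and buffer size are placeholders), just to show which semantics the FPGA end would have to match:

```c
/* Host-side sketch of a RoCE/verbs transfer via librdmacm
 * (link with -lrdmacm -libverbs).
 * The peer address, port, queue depths and buffer size are placeholders. */
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

int main(void)
{
    struct rdma_addrinfo hints = { .ai_port_space = RDMA_PS_TCP }, *res;
    struct ibv_qp_init_attr attr = {
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .sq_sig_all = 1,                       /* generate a completion per send */
    };
    struct rdma_cm_id *id;
    struct ibv_wc wc;
    static char buf[1 << 20];                  /* 1 MiB payload buffer */

    if (rdma_getaddrinfo("192.168.1.10", "7471", &hints, &res)) return 1;
    if (rdma_create_ep(&id, res, NULL, &attr)) return 1;

    struct ibv_mr *mr = rdma_reg_msgs(id, buf, sizeof buf);   /* pin + register */
    if (!mr || rdma_connect(id, NULL)) return 1;

    /* Push one buffer to the peer; the real system would loop over chunks
     * arriving from the FPGA. */
    if (rdma_post_send(id, NULL, buf, sizeof buf, mr, 0)) return 1;
    if (rdma_get_send_comp(id, &wc) <= 0 || wc.status != IBV_WC_SUCCESS) return 1;

    rdma_disconnect(id);
    rdma_dereg_mr(mr);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    return 0;
}
```

The missing piece on the FPGA is everything below that send: the reliable-connection state machine, the RoCE framing and the memory registration semantics.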

Does anybody have experience with such an interconnect based on 100 GbE? The 100 GbE transceiver on the Xilinx FPGA (UltraScale+) comes out of the box with little or no protocol support. What about hardware offloading? Xilinx seems to support this …

The second concept is to use InfiniBand as the interconnect for the compute-node/FPGA coupling. Then it would be very helpful to get an InfiniBand IP core for the FPGA supporting at least FDR (5 GBytes/s transfer rate!). Could someone provide me with some business contacts for this?

A third concept could be to develop an integrated solution: FPGA plus ConnectX silicon on a PCIe interface card. This seems to be the most expensive solution with respect to time and effort.

It would be very helpful to get some supporting answers from the community.

Best regards

Michael

Crossfield Technology LLC (www.crossfieldtech.com) has an Instrumentation Gateway that connects a Mellanox ConnectX-3 to a Xilinx Virtex-7 through a PCIe switch. We implement OFED under embedded Linux running on an NXP QorIQ P4040, which is also connected to the PCIe switch, and we use FPGADirect to perform RDMA transactions directly into FPGA memory, similar to GPUDirect. The Instrumentation Gateway implements dual VITA 57.1 FMC slots for interfacing to data acquisition FMC modules. We are planning to migrate this design to UltraScale+ in the future. Please contact info@crossfieldtech.com if you have any questions.
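For a rough idea of what this looks like from the application side, here is a generic verbs sketch (not our actual FPGADirect code; the PCI resource path and window size are just placeholders): map the FPGA memory window into the process and register it with the HCA so a remote peer can RDMA-write straight into it. Registering BAR/device memory this way generally needs peer-direct support in the driver stack.

```c
/* Generic sketch: map an FPGA memory window and register it with the HCA so
 * a remote node can RDMA-write directly into FPGA memory (GPUDirect-style).
 * The PCI resource path and window size are placeholders; registering
 * BAR/device memory usually requires peer-direct support in the driver. */
#include <fcntl.h>
#include <infiniband/verbs.h>
#include <stdio.h>
#include <sys/mman.h>

#define FPGA_WIN_SIZE (64u << 20)               /* 64 MiB window (assumed) */

int main(void)
{
    /* 1. Map the FPGA memory window (path is board specific). */
    int fd = open("/sys/bus/pci/devices/0000:03:00.0/resource2", O_RDWR);
    void *fpga = mmap(NULL, FPGA_WIN_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (fd < 0 || fpga == MAP_FAILED) { perror("map FPGA window"); return 1; }

    /* 2. Open the ConnectX device and register the window for remote writes. */
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) return 1;
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd       = ibv_alloc_pd(ctx);
    struct ibv_mr *mr       = ibv_reg_mr(pd, fpga, FPGA_WIN_SIZE,
                                         IBV_ACCESS_LOCAL_WRITE |
                                         IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr (peer-direct support missing?)"); return 1; }

    /* 3. The remote side can now RDMA_WRITE into the FPGA using addr + rkey. */
    printf("expose addr=%p rkey=0x%x to the remote peer\n", fpga, mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```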

Thanks,

Brett

Hi Michael,

Mellanox has the Innova Flex card, which fits most of your requirements.

It is FPGA plus ConnectX silicon on a PCIe interface card, and it supports RDMA and RoCE.

The current generation of the card supports 40 Gb/s per port.

Please check if it can work for you.

Thank you,

Vladimir

Hi Vladimir,

thanks for your answer.

I’d studied the product description of the Innova Flex before opening this discussion. Unfortunately, the Innova Flex does not support our concept, since it is mainly designed for packet manipulation/inspection. That means data received via IB/Ethernet can be analysed and actions can be taken depending on the packet content.

In our concept, the FPGA controls external devices and aggregates the device traffic into a 40 Gbit/s stream to be transferred via IB. The Innova Flex lacks such functionality, as well as FPGA input/output ports open to external devices. Direct coupling of the FPGA data stream to IB is not possible.

But, once more, thanks for your reply!

Best regards

Michael

Hi Michael,

I am building a very similar system to the one you described and ran into the same problem.

Have you been able to integrate RDMA into Xilinx FPGA?

Thanks,

David.