InfiniBand RDMA capability on Windows 8.1 with Intel MPI

We are deploying a cluster running Windows 8.1, with nodes equipped with Mellanox InfiniBand adapters (Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]).

I am using the Intel MPI Library and trying to use RDMA (via DAPL) to take advantage of the full bandwidth of InfiniBand.

I installed the InfiniBand driver from Mellanox's website successfully.

  1. I cannot run my MPI benchmark application with DAPL enabled. Falling back to TCP/IP (IPoIB), the benchmark numbers are poor (~300 MB/sec over MPI).

  2. In a test where one machine acts as a server and another as a client, I can see they send/receive messages at up to 2~3 GB/sec (which is exactly what I want…)
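For point 1, it can help to force the DAPL fabric explicitly and turn on Intel MPI's debug output, which reports which fabric and provider were actually selected. A minimal launch sketch, assuming two hypothetical hosts `node01`/`node02` and the Intel MPI Benchmarks binary `IMB-MPI1`; `I_MPI_FABRICS` and `I_MPI_DEBUG` are standard Intel MPI environment variables, but the DAPL provider name (`ibnic0v2` here) comes from your local dat.conf and may differ on your system:

```shell
:: Launch-configuration sketch (not runnable outside a cluster).
:: Force the DAPL fabric and name the provider from dat.conf explicitly;
:: I_MPI_DEBUG=2 makes Intel MPI print the fabric/provider it selected,
:: so a silent fallback to TCP becomes visible.
mpiexec -n 2 -hosts node01,node02 ^
  -genv I_MPI_FABRICS dapl ^
  -genv I_MPI_DAPL_PROVIDER ibnic0v2 ^
  -genv I_MPI_DEBUG 2 ^
  IMB-MPI1 PingPong
```

If this aborts, the debug output (or the error from the DAPL layer) usually says whether the provider name was not found or the underlying verbs interface failed to open, which narrows down whether the problem is in dat.conf or in the driver installation.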

If you have any knowledge about this, please share it with me.

Send/receive at up to 2~3 GB/sec seems like fair performance to me, considering that you are using a fairly old ConnectX-2 adapter under Windows 8.1.
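A back-of-envelope check supports that: QDR InfiniBand is 4 lanes at 10 Gb/s signalling with 8b/10b encoding, so the usable data rate is 32 Gb/s, about 4 GB/s theoretical peak, and 2~3 GB/sec is a reasonable real-world fraction of that:

```shell
# QDR link math: 4 lanes x 10 Gb/s signalling, 8b/10b encoding
lanes=4
signal_gbps=10
data_gbps=$(( lanes * signal_gbps * 8 / 10 ))  # 32 Gb/s usable after encoding
echo "peak GB/s: $(( data_gbps / 8 ))"          # ~4 GB/s theoretical peak
```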

Use the Performance Tuning Guidelines for Mellanox Network Adapters to see if you can stretch and improve those figures.