Quick networking between Hyper-V guests

Hi, I hope this is a simple question; I just can’t find the answer, and a lot of the links in this forum are returning 404s now.

All I’m trying to do is have two Hyper-V guests on the same Windows Server 2025 Datacenter host communicate with each other as fast as memory transfers can handle, not limited by the speed of the physical adapter.

To that end, I’m using a ConnectX-7 adapter, thinking that RDMA should let them communicate at near memory speed. Well, I have RDMA enabled on both guests (which wasn’t straightforward, by the way), and they’re still limited to the 10 Gbit/s speed of the physical adapter, or close to it.
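For reference, this is roughly how I checked that RDMA is on at the host and exposed to the guests. The adapter and VM names are placeholders for my setup, and Get-VMNetworkAdapterRdma assumes a recent Hyper-V build with guest-RDMA support:

```powershell
# On the host: confirm the ConnectX-7 reports RDMA as enabled
Get-NetAdapterRdma -Name "ConnectX-7"        # adapter name is a placeholder

# Confirm the vSwitch the guests share, and whether RDMA is exposed to their vNICs
Get-VMSwitch | Format-List Name, SwitchType
Get-VMNetworkAdapterRdma -VMName "GuestA"    # repeat for "GuestB"

# Inside each guest: the virtual NIC should also report RDMA enabled
Get-NetAdapterRdma
```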

This is a beefy machine ($30k+), and these are the only two guests running on it, so if there were any hardware bottlenecks, I’d be amazed.

To give you numbers, one guest is reading from an NVMe array that delivers 50 GB/s (capital Bytes), and the other guest is writing to a similar NVMe array that can write at about 40 GB/s. But an SMB transfer between the two runs at about 200 MB/s, so roughly 0.5% of the speed it should be.
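In case it matters, this is roughly how I’m measuring the SMB number (the file path and share name are placeholders):

```powershell
# Rough SMB throughput check: time a large copy from the fast array to the other guest's share
$file = "D:\testdata\big.bin"            # placeholder: large file on the source NVMe array
$dest = "\\GuestB\transfer\big.bin"      # placeholder: SMB share exposed by the other guest
$t = Measure-Command { Copy-Item -Path $file -Destination $dest }
$mbps = (Get-Item $file).Length / 1MB / $t.TotalSeconds
"{0:N0} MB/s" -f $mbps
```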

1. How are you testing bandwidth? With nd_write_bw, mlxndperf, or something else?
Please follow the test and tuning guides below:
https://docs.nvidia.com/networking/display/winof2v244/fabric+performance+utilities
https://docs.nvidia.com/networking/display/winof2v244/performance+tuning
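Whatever tool you use, it is worth confirming during the test that the traffic is actually going over RDMA. A minimal check, assuming the standard Windows "RDMA Activity" performance counters are present on your build:

```powershell
# Sample RDMA traffic counters while the benchmark or SMB transfer is running
Get-Counter -Counter "\RDMA Activity(*)\RDMA Inbound Bytes/sec",
                     "\RDMA Activity(*)\RDMA Outbound Bytes/sec" -SampleInterval 2 -MaxSamples 5
# If these stay at zero while the transfer runs, the traffic is falling back to TCP.
```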

2. For the SMB and VM setup of your test environment:

How is the NIC attached to the VMs: via an SR-IOV VF or via VMQ?
https://docs.nvidia.com/networking/display/winof2v244/virtualization
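For example, something like the following shows which data path the vNICs are actually using (VM names are examples, and property names may vary slightly by build):

```powershell
# Host side: is SR-IOV supported/enabled on the physical adapter and on the vSwitch?
Get-NetAdapterSriov
Get-VMSwitch | Format-List Name, IovEnabled, IovSupport, IovSupportReasons

# Per VM: IovWeight > 0 requests a VF; 0 means the synthetic (VMQ/vSwitch) path is used
Get-VMNetworkAdapter -VMName "GuestA" | Format-List Name, SwitchName, IovWeight, VmqWeight

# To request a VF for the guest (only takes effect if SR-IOV is enabled end to end):
Set-VMNetworkAdapter -VMName "GuestA" -IovWeight 100
```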

Have you verified that SMB Multichannel is enabled and RDMA tuning is applied?
https://docs.nvidia.com/networking/display/winof2v244/storage+protocols
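As a quick check on both guests (these are the standard SMB cmdlets; run the connection query while a transfer is active):

```powershell
# Both guests: the fast NIC should show RSS Capable / RDMA Capable = True
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface

# Multichannel should be enabled on client and server
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
Get-SmbServerConfiguration | Select-Object EnableMultiChannel

# While a copy is running, on the client guest: see which interfaces/channels SMB is using
Get-SmbMultichannelConnection
```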
