Does PCIe prefetch memory impact the throughput?

We are using a ConnectX-6 NIC, and its lspci configuration is shown below:
Ethernet controller: Mellanox Technologies MT28908 Family [ConnectX-6]
Subsystem: Mellanox Technologies Device 0028
Flags: bus master, fast devsel, latency 0, IRQ 182, NUMA node 1
Memory at a0000000 (64-bit, prefetchable) [size=32M]
Expansion ROM at 9d000000 [disabled] [size=1M]
Capabilities: [60] Express Endpoint, MSI 00
Capabilities: [48] Vital Product Data
Capabilities: [9c] MSI-X: Enable+ Count=64 Masked-
Capabilities: [c0] Vendor Specific Information: Len=18 <?>
Capabilities: [40] Power Management version 3
Capabilities: [100] Advanced Error Reporting
Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
Capabilities: [1c0] #19
Capabilities: [230] Access Control Services
Capabilities: [320] #27
Capabilities: [370] #26
Capabilities: [420] #25
Kernel driver in use: mlx5_core
Kernel modules: mlx5_core

We have measured the RDMA throughput while varying the message size.
The results show that when the message size exceeds 32 MB (e.g. 33, 34, 35 MB, …), the throughput drops by ~50 Gb/s.
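A sweep like this can be reproduced with, for example, the perftest suite's ib_write_bw; the device name (mlx5_0) and server hostname (rdma-server) below are placeholders, and we assume the perftest build in use accepts message sizes above 32 MB via -s/--size:

# On the server side (placeholder hostname rdma-server), start the listener for a 32 MB message size:
ib_write_bw -d mlx5_0 --report_gbits -s 33554432
# On the client, run the same size against the server, repeating for each size of interest:
ib_write_bw -d mlx5_0 --report_gbits -s 33554432 rdma-server   # 32 MB
ib_write_bw -d mlx5_0 --report_gbits -s 35651584 rdma-server   # 34 MB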

So we would like to know whether the 32 MB prefetchable memory region listed in the above configuration has any impact on RDMA throughput.
If yes, is there any way to modify or disable it to confirm its impact on throughput?

Hello,

There is no relation between the BAR size and the message size or the RDMA performance/throughput.
The throughput drop you see has a different cause; the fact that both the prefetchable BAR and the message size are 32 MB is just a coincidence.
You can confirm this by modifying the PF_LOG_BAR_SIZE firmware parameter and verifying that it does not affect performance.
You can modify it with the mlxconfig tool (part of the MFT package, the Mellanox Firmware Tools). The default setting is 5 (which is 2^5 = 32 MB).
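For example, the parameter can be queried and changed along these lines; the MST device path (/dev/mst/mt4123_pciconf0) is only an example and will differ on your system:

mst start                                                      # load the MST modules and create the /dev/mst device nodes
mlxconfig -d /dev/mst/mt4123_pciconf0 query                    # PF_LOG_BAR_SIZE is listed in the output
mlxconfig -d /dev/mst/mt4123_pciconf0 set PF_LOG_BAR_SIZE=4    # e.g. 2^4 = 16 MB BAR
# The new BAR size takes effect only after a firmware reset or host reboot.

After confirming the throughput is unchanged, you can set the parameter back to the default value of 5.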

Best Regards,
Viki