RTX Pro 6000 Blackwell does not advertise PCIe ATS, blocking ESXi's P2P path

Hi,

I’m trying to get PCIe-based P2P working on a set of RTX Pro 6000 Blackwell Server Edition GPUs, since they don’t support NVLink. I’m using ESXi 8.x with DirectPath I/O passthrough.

In ESXi I configured my VM with the Broadcom-recommended extra options:

  • pciPassthru.allowP2P = "TRUE"
  • pciPassthru.RelaxACSforP2P = "TRUE"

Unfortunately, the required ATS capability does not appear to be enabled (or even reported) for the GPU:

[root@esxi01:~] vsish -e get /hardware/pci/seg/0/bus/132/slot/0/func/0/isAtsEnabled
0
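As a side note for anyone cross-checking: vsish addresses the bus in decimal, while lspci-style BDFs are in hex, so bus 132 above corresponds to bus 0x84. On a bare-metal Linux host, the ATS extended capability (if the device exposed it) could be checked roughly like this (the BDF is just taken from this example; adjust for your system):

```shell
# vsish uses decimal bus numbers; lspci notation is hex.
printf 'bus 0x%x\n' 132     # -> bus 0x84

# On bare-metal Linux, the ATS extended capability (if present) shows up
# in the device's extended config space, e.g.:
#   lspci -vvv -s 84:00.0 | grep -i 'Address Translation Service'
```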

Is there anything I can do about this, or is it completely unsupported? Will support be added at some point?

Details:

  • GPU: RTX PRO 6000 Blackwell Server Edition (GB202GL)
  • VBIOS: 98.02.67.00.0A
  • GSP Firmware: 590.48.01
  • Platform: ESXi 8.x with DirectPath I/O passthrough

Hi,

I’m not aware that this is possible at all. RTX Pro should really only be used when the LLM fits into a single GPU’s memory.

Thanks for the reply!

That is unfortunate. We know that the RTX Pro 6000 Blackwell is not the optimal choice for tensor-parallel inference, but we hoped to at least improve throughput via PCIe P2P transfers.

Does it make sense to evaluate a bare-metal installation without the IOMMU enabled? Some sources claim that the driver implements P2P via direct BAR access in that case.
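If we do try bare metal, a quick way to see what the driver decided would be to query its own topology report. This is just a sketch assuming a bare-metal Linux host with a recent NVIDIA driver installed (the `topo -p2p` query is not available in all driver versions):

```shell
# Sketch for a bare-metal Linux host with the NVIDIA driver installed
# (guarded so it is a no-op on machines without nvidia-smi):
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi topo -m       # PCIe link topology between GPU pairs (PIX/PXB/PHB/SYS)
    nvidia-smi topo -p2p r   # whether the driver enables P2P reads per pair
fi
```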