I have a block device nvme0n1 on a BlueField-2 (connected via NVMe-oF), and I want to access it directly from the host.
In my current setup, I cannot see the SSD from the host.
Are there any instructions on how to connect the devices and configure the driver? Thank you!
Additional information: I know that with the BlueField controller HCA, I can access the SSD on the host from the BlueField-2. But is it possible in reverse?
You have to use SNAP for that: on the DPU, connect to the remote NVMe-oF target via SPDK so that it appears as an SPDK block device, and then export that block device to the host with SNAP's NVMe emulation.
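To make the SPDK half of that concrete, here is a minimal Python sketch that sends a JSON-RPC request to a running SPDK/SNAP service on the BF2 Arm cores and attaches the remote NVMe-oF target as a local bdev. The socket path, transport, target address/port, and subsystem NQN below are placeholders for your environment, and the same call can be issued with SPDK's rpc.py bdev_nvme_attach_controller; treat this as illustrative rather than a drop-in configuration.

```python
#!/usr/bin/env python3
"""Minimal sketch: attach a remote NVMe-oF namespace as an SPDK bdev by
sending JSON-RPC to the SPDK/SNAP service running on the BF2 Arm cores.
All addresses, names, and NQNs below are placeholders for your setup."""
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # default SPDK JSON-RPC Unix socket


def spdk_rpc(method, params=None, req_id=1):
    """Send one JSON-RPC 2.0 request over the Unix socket and return the parsed reply."""
    request = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params:
        request["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf.decode())  # reply is complete once it parses
            except ValueError:
                continue  # keep reading


# Connect to the remote NVMe-oF target; its namespaces then show up inside
# SPDK as bdevs named Nvme0n1, Nvme0n2, ... which SNAP can expose to the host.
reply = spdk_rpc("bdev_nvme_attach_controller", {
    "name": "Nvme0",
    "trtype": "rdma",                        # or "tcp", depending on your fabric
    "traddr": "192.168.100.1",               # placeholder: target IP
    "adrfam": "ipv4",
    "trsvcid": "4420",                       # placeholder: target port
    "subnqn": "nqn.2016-06.io.spdk:cnode1",  # placeholder: target subsystem NQN
})
print(reply)
```

The second half, exposing the resulting bdev to the x86 host as an emulated NVMe function, is done through the SNAP RPC interface (snap_rpc.py); the exact controller and namespace commands vary between SNAP releases, so follow the SNAP documentation that matches the software image on your DPU.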
As for your additional question:

"I know that with the BlueField controller HCA, I can access the SSD on the host from the BlueField-2. But is it possible in reverse?"
This depends on what you mean by “access.” I suspect you mean via PCIe, because the other access method is simply NVMe-oF, which you are already using. If you do mean PCIe, then the answer is no in both directions: the BF2 cannot access the host’s SSD via PCIe.
My previous response applies to the DPU form factor of the HCA, where the card is inserted in a PCIe slot and another CPU (x86/Arm) is the root complex. That is not the case with a BF controller card: you do not insert a BF controller card into a server; it will not work. I’m not sure the moniker “HCA” even fits such a device, because it does not serve an external host by adapting it to a network channel. Such a device is more aptly called a “self-hosted” card, or SHC for short.
The BlueField SHC bypasses the host in that it requires no external host at all. So yes, if you have the BF controller card, it is the root complex, and if its PCIe links are fanned out to SSD connectors populated with SSDs, the Arm host on the BF SoC can access those SSDs directly. However, note that there is then no external host connected to the BF card, using it as an HCA and able to access the SSDs. In other words, the BF controller card bypasses the external host by not allowing it to be there in the first place.
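For completeness, a quick way to confirm that topology from the Arm side of a controller card is to enumerate the NVMe controllers Linux sees under its root complex. The sketch below is generic Linux sysfs walking, nothing BlueField-specific, and assumes the SSDs are already bound to the kernel nvme driver.

```python
#!/usr/bin/env python3
"""List NVMe controllers visible to this (Arm) root complex via sysfs.
Generic Linux; nothing here is BlueField-specific."""
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    # The PCI address shows the controller is enumerated under this root complex.
    pci_addr = (ctrl / "device").resolve().name
    namespaces = [ns.name for ns in ctrl.glob(f"{ctrl.name}n*")]
    print(f"{ctrl.name}: {model} @ {pci_addr} namespaces={namespaces}")
```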