Exploring GPUDirect Storage and Offload Capabilities on BlueField-2/BlueField-3

Dear GDS and BlueField Team,

Upon reviewing the BlueField-2 and BlueField-3 datasheets, I noticed that GPUDirect Storage (GDS) is listed as a feature under the “HPC/AI Accelerations” category. Both BlueField-2 and BlueField-3 incorporate a ConnectX NIC, which can be used for RDMA and thus forms the foundation for NVMe over Fabrics (NVMe-oF) deployments. With NVMe-oF support, GPUs can use GDS to read data directly from remote NVMe storage.
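For context, my mental model of the data path is the standard cuFile sequence sketched below. This is a minimal sketch only: the mount point /mnt/nvmeof/data.bin is a hypothetical NVMe-oF-backed file, and most error handling is omitted.

```c
// Minimal GDS read sketch: DMA from (remote) NVMe straight into GPU memory.
// Assumes a GDS-capable file system mounted at the hypothetical path below.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE            // for O_DIRECT
#endif
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const size_t len = 1 << 20;   // 1 MiB read

    // O_DIRECT bypasses the page cache so the transfer can go GPU-direct
    int fd = open("/mnt/nvmeof/data.bin", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    cuFileDriverOpen();           // initialize the cuFile/nvidia-fs driver

    CUfileDescr_t descr = {0};
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    descr.handle.fd = fd;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    void *devBuf = NULL;
    cudaMalloc(&devBuf, len);
    cuFileBufRegister(devBuf, len, 0);   // register the GPU buffer for DMA

    // The read lands directly in GPU memory, with no host bounce buffer
    ssize_t n = cuFileRead(fh, devBuf, len, 0 /*file offset*/, 0 /*buffer offset*/);
    printf("cuFileRead returned %zd bytes\n", n);

    cuFileBufDeregister(devBuf);
    cuFileHandleDeregister(fh);
    cudaFree(devBuf);
    cuFileDriverClose();
    close(fd);
    return 0;
}
```

(Something like nvcc gds_read.cu -lcufile should build it, assuming the GDS packages are installed.)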

However, I’m curious whether BlueField offers any GDS-specific offload capabilities beyond this, as the datasheet seems to suggest. Could you please provide any relevant links or documentation covering these features?

Thank you for your attention to this inquiry.

All info is here,

Hi Xiaofengl,

Thank you for getting back to me.

I’ve reviewed the GDS documentation, but I could find only one mention of BlueField, in a small section here: NVIDIA GPUDirect Storage Design Guide - NVIDIA Docs

Unfortunately, that section didn’t provide any helpful information. I’m wondering whether BlueField is simply treated as a ConnectX card in the GDS stack. Could you please clarify how the “HPC/AI Accelerations” are achieved with BlueField in the GDS stack? Specifically, I’m interested in understanding whether any tasks are offloaded to BlueField.
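For reference, here is a minimal sketch of how I have been probing what the GDS driver reports on this host, assuming the cuFile library is installed; the field names follow the CUfileDrvProps_t layout in cufile.h, where the individual capability bits (e.g. for NVMe and NVMe-oF) are also enumerated.

```c
// Sketch: query what the cuFile driver reports as supported on this host.
#include <cufile.h>
#include <stdio.h>

int main(void) {
    CUfileError_t st = cuFileDriverOpen();
    if (st.err != CU_FILE_SUCCESS) {
        fprintf(stderr, "cuFileDriverOpen failed: %d\n", st.err);
        return 1;
    }

    CUfileDrvProps_t props;
    cuFileDriverGetProperties(&props);

    // dstatusflags is a bitmask; the individual bits (e.g. NVMe and
    // NVMe-oF support) are defined in cufile.h
    printf("nvidia-fs version: %u.%u\n",
           props.nvfs.major_version, props.nvfs.minor_version);
    printf("driver status flags: 0x%x\n", props.nvfs.dstatusflags);

    cuFileDriverClose();
    return 0;
}
```

As far as I can tell, neither this output nor the bundled gdscheck -p tool distinguishes a BlueField from a plain ConnectX adapter, which is what prompted the question above.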

Thank you for your attention to this inquiry.