BlueField-2 accessing a GPU in the same server

Hi there,
It might be a silly question, but I have been reading a lot about solutions like the GPUDirect family (e.g., GPUDirect RDMA), which make it easier, and obviously faster, for GPUs to access remote GPUs' memory/storage through the BlueField-2's features.
I am wondering how this works, at least theoretically.
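From what I have read, the key mechanism behind GPUDirect RDMA is that the ConnectX NIC inside the BlueField-2 can DMA straight to/from GPU memory once that memory has been registered with the RDMA stack. Below is a minimal sketch of just the registration step, assuming the CUDA toolkit, rdma-core (libibverbs), and the nvidia-peermem kernel module are present on the host; the compile line is only illustrative:

```c
/* Sketch: registering GPU memory for RDMA (GPUDirect RDMA).
 * Illustrative compile line:
 *   gcc gdr_reg.c -libverbs -lcudart \
 *       -I/usr/local/cuda/include -L/usr/local/cuda/lib64
 */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

int main(void) {
    /* Open the first RDMA device; on a BlueField-2 host this is the
     * ConnectX function the DPU exposes to the host. */
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate the buffer in GPU memory, not host memory. */
    void *gpu_buf = NULL;
    size_t len = 1 << 20;
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n"); return 1;
    }

    /* The interesting step: ibv_reg_mr() on a GPU pointer. With
     * nvidia-peermem loaded, the RDMA stack pins the GPU pages and the
     * NIC can then DMA to/from them directly; the CPU only sets it up. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr on GPU memory"); return 1; }
    printf("registered GPU buffer: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

After this, a remote peer can RDMA-read/write that buffer via the rkey and the data never touches host RAM, which is (as far as I understand) where the speedup comes from.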

My theory: in general, the GPU does not know that there is a BlueField and vice versa. Running lspci on the BlueField's Arm cores does not list the other cards plugged into the host's PCIe slots, presumably because the DPU enumerates its own PCIe domain rather than the host's. So the necessary drivers and software layers are installed on the host, and when the GPU would normally call on the CPU to do something, the CPU, which is aware of the BlueField's existence, can communicate with it and set up the offload so that the data path itself becomes direct.
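To make the "who sees whom" part concrete: lspci essentially just reads sysfs, so a small sketch like the one below can be compiled on both the host and the DPU's Arm cores to compare the two PCIe domains. My (unconfirmed) expectation is that the NVIDIA vendor ID 0x10de only shows up on the host side:

```c
/* Sketch: enumerate the local PCIe domain the way lspci does, by
 * reading sysfs. Linux-only; run on both the host and the DPU. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

int main(void) {
    DIR *d = opendir("/sys/bus/pci/devices");
    if (!d) { perror("opendir"); return 1; }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.') continue;
        char path[512], vendor[16] = "";
        snprintf(path, sizeof path,
                 "/sys/bus/pci/devices/%s/vendor", e->d_name);
        FILE *f = fopen(path, "r");
        if (f) {
            if (fgets(vendor, sizeof vendor, f))
                vendor[strcspn(vendor, "\n")] = 0;
            fclose(f);
        }
        printf("%s vendor=%s%s\n", e->d_name, vendor,
               strcmp(vendor, "0x10de") == 0 ? "  <-- NVIDIA GPU" : "");
    }
    closedir(d);
    return 0;
}
```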
My main question is whether the BlueField can also access the GPU directly (after coordinating with the CPU first). For instance, can I run an app on the BlueField that uses the GPU's resources for computation-heavy tasks? If so, are there any discussions/write-ups about it?
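To make the question concrete, here is the shape of what I imagine would be needed, purely as a sketch: since the GPU is not in the DPU's PCIe domain, an app on the Arm cores would hop over the network to a host-side daemon that owns the CUDA context. The port, the host address on the host<->DPU interface, and the trivial "send floats, get floats back" protocol are all made up for illustration:

```c
/* Sketch: DPU-side client asking a (hypothetical) host daemon to run
 * a GPU computation on its behalf. Runs on the BlueField's Arm cores. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in host = {0};
    host.sin_family = AF_INET;
    host.sin_port = htons(5000);              /* hypothetical daemon port */
    /* Hypothetical host address on the host<->DPU network interface. */
    inet_pton(AF_INET, "192.168.100.1", &host.sin_addr);
    if (connect(s, (struct sockaddr *)&host, sizeof host) != 0) {
        perror("connect"); return 1;
    }

    float in[4] = {1, 2, 3, 4}, out[4];
    /* The daemon would cudaMemcpy this to the GPU, launch a kernel,
     * and send the result back. */
    write(s, in, sizeof in);
    read(s, out, sizeof out);
    printf("result computed on the host GPU: %f ...\n", out[0]);
    close(s);
    return 0;
}
```

What I would like to know is whether there is a more direct (ideally zero-copy) way than this kind of manual RPC.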

I tried installing CUDA and the NVIDIA drivers/utils on the SmartNIC to check whether nvidia-smi would be able to find the GPU. The driver installation even tainted the kernel and required enrolling new keys into EFI. Nonetheless, after doing so, nvidia-smi still does not see anything.
I guess this was not the most appropriate way to test it anyway, but at least I am closing the gaps :)
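In case someone wants to repeat this with less effort: a lighter-weight probe than installing the whole driver stack is to ask the CUDA runtime directly how many devices it sees. On the BlueField's Arm cores I would expect an error such as cudaErrorNoDevice, consistent with what nvidia-smi reported:

```c
/* Sketch: minimal CUDA device visibility probe.
 * Compile with nvcc, or with gcc plus -lcudart if the toolkit is installed. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int n = 0;
    cudaError_t err = cudaGetDeviceCount(&n);
    if (err != cudaSuccess) {
        printf("no visible GPU: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("%d CUDA device(s) visible\n", n);
    return 0;
}
```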

FYI.