How to use the CUDA API to implement NVLink queries

Dear colleagues:

Is it possible to implement the following NVML functions on top of the CUDA API?

//4, mapping to nvmlDeviceGetNvLinkState in NVML API
static nvmlReturn_t (*nvmlInternalDeviceGetNvLinkState)(nvmlDevice_t device, unsigned int link, nvmlEnableState_t *isActive);
//5, mapping to nvmlDeviceGetNvLinkRemotePciInfo in NVML API
static nvmlReturn_t (*nvmlInternalDeviceGetNvLinkRemotePciInfo)(nvmlDevice_t device, unsigned int link, nvmlPciInfo_t *pci);
//6, mapping to nvmlDeviceGetNvLinkCapability in NVML API
static nvmlReturn_t (*nvmlInternalDeviceGetNvLinkCapability)(nvmlDevice_t device, unsigned int link,
                                                             nvmlNvLinkCapability_t capability, unsigned int *capResult);

I checked the CUDA API samples and found cudaDeviceCanAccessPeer, which checks whether two GPUs have peer-to-peer (P2P) access.
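For reference, this is the pattern I took away from the samples, reduced to a minimal standalone check (my own version, not copied verbatim from any particular sample):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess) {
        fprintf(stderr, "no CUDA devices visible\n");
        return 1;
    }

    // Print the peer-access matrix for every ordered pair of GPUs.
    for (int i = 0; i < count; ++i) {
        for (int j = 0; j < count; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            printf("GPU %d -> GPU %d : %s\n", i, j, canAccess ? "P2P ok" : "no P2P");
        }
    }
    return 0;
}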

So can we use something like the following:

nvmlDevice_t target = devices[link];
// Check for peer access between the participating GPUs:
int can_access_peer_0_1;
int can_access_peer_1_0;
cudaDeviceCanAccessPeer(&can_access_peer_0_1, realDevice->gpuIndex, target->gpuIndex);
cudaDeviceCanAccessPeer(&can_access_peer_1_0, target->gpuIndex, realDevice->gpuIndex);

The target device is looked up via devices[link], my shim's own device table.
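To make the question concrete, this is roughly the stub I have in mind for nvmlDeviceGetNvLinkState. The ShimDevice struct and the devices[] table are my own shim internals (not NVML), nvml_shim.h is my local copy of the NVML types and enums, and the whole thing assumes that "link N is up" can be approximated by "bidirectional P2P access to GPU N works", which is exactly the part I am unsure about:

#include <cuda_runtime.h>
#include "nvml_shim.h"   // my local copies of nvmlReturn_t, nvmlEnableState_t and the NVML_* enums

// Hypothetical shim-internal handle: in the forged library nvmlDevice_t is really a ShimDevice*.
typedef struct {
    int gpuIndex;        // CUDA runtime device ordinal
} ShimDevice;

extern ShimDevice *devices[];   // filled in at nvmlInit time, one entry per CUDA device

nvmlReturn_t nvmlDeviceGetNvLinkState(nvmlDevice_t device, unsigned int link,
                                      nvmlEnableState_t *isActive)
{
    ShimDevice *realDevice = (ShimDevice *)device;
    ShimDevice *target = devices[link];   // treat the link index as the remote GPU's index

    int fwd = 0, rev = 0;
    if (cudaDeviceCanAccessPeer(&fwd, realDevice->gpuIndex, target->gpuIndex) != cudaSuccess ||
        cudaDeviceCanAccessPeer(&rev, target->gpuIndex, realDevice->gpuIndex) != cudaSuccess)
        return NVML_ERROR_UNKNOWN;

    // Report the "link" as active only when P2P access works in both directions.
    *isActive = (fwd && rev) ? NVML_FEATURE_ENABLED : NVML_FEATURE_DISABLED;
    return NVML_SUCCESS;
}

Does that mapping sound reasonable, or is P2P accessibility too coarse a proxy for NVLink state?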

Since macOS lacks libnvidia-ml.so, I have to forge a libnvidia-ml.dylib on the system that provides all of the required functions, so that the NCCL library can be loaded, which is a prerequisite for TensorFlow, PyTorch, and JAX.
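To clarify what I mean by "forge": the plan is a small C shim compiled into libnvidia-ml.dylib that exports the NVML symbols NCCL resolves at runtime, backed by CUDA runtime calls where possible and harmless defaults otherwise. A rough sketch of the shape of it (nvml_shim.h is my own header, and the exact set of symbols NCCL actually needs is something I still have to confirm):

// nvml_shim.c -- built with something like:
//   clang -dynamiclib -o libnvidia-ml.dylib nvml_shim.c \
//         -I/usr/local/cuda/include -L/usr/local/cuda/lib -lcudart
#include <cuda_runtime.h>
#include "nvml_shim.h"   // my local definitions of the NVML types and enums

nvmlReturn_t nvmlInit(void)
{
    // Nothing to set up here; the CUDA runtime initializes itself lazily.
    return NVML_SUCCESS;
}

nvmlReturn_t nvmlShutdown(void)
{
    return NVML_SUCCESS;
}

nvmlReturn_t nvmlDeviceGetCount(unsigned int *count)
{
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess)
        return NVML_ERROR_UNKNOWN;
    *count = (unsigned int)n;
    return NVML_SUCCESS;
}

// ... plus the NvLink entry points quoted above, which is where I hope
// cudaDeviceCanAccessPeer (state/capability) and cudaDeviceGetPCIBusId
// (remote PCI info) can stand in for the real NVML queries.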

Thanks in advance
Best regards
Orlando