NVML function: Insufficient permission

I want to get NVML GPU metrics into Datadog. I am using GKE version 1.23.8-gke.1900.

I have deployed the Datadog Agent (v7.39.0) with the DaemonSet approach, as described in the Datadog Agent documentation. When I run the agent status command, I get only 6 of the 14 available NVML metrics (namely nvml.power_usage, nvml.total_energy_consumption, nvml.pcie_tx_throughput, nvml.pcie_rx_throughput, nvml.temperature, and nvml.device_count).
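
To narrow down where the missing metrics are lost, I can run a small pynvml probe from inside the container, along the lines of the sketch below. This is only a diagnostic sketch: it assumes the nvidia-ml-py (pynvml) package is installed and libnvidia-ml is visible in the pod, and the mapping of NVML calls to Datadog metric names is my own guess.

    # probe_nvml.py - diagnostic sketch: check which NVML calls succeed in the pod.
    # Assumes the nvidia-ml-py (pynvml) package and access to libnvidia-ml;
    # the call-to-metric mapping below is an assumption, not Datadog's exact list.
    import pynvml

    def probe(name, fn):
        try:
            print(f"{name}: OK -> {fn()}")
        except pynvml.NVMLError as err:
            print(f"{name}: FAILED -> {err}")

    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        probe("device_count", pynvml.nvmlDeviceGetCount)
        probe("power_usage", lambda: pynvml.nvmlDeviceGetPowerUsage(handle))
        probe("temperature", lambda: pynvml.nvmlDeviceGetTemperature(
            handle, pynvml.NVML_TEMPERATURE_GPU))
        # The two metric families that are missing for us:
        probe("memory_info", lambda: pynvml.nvmlDeviceGetMemoryInfo(handle))
        probe("utilization", lambda: pynvml.nvmlDeviceGetUtilizationRates(handle))
    finally:
        pynvml.nvmlShutdown()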

We have created a GKE private cluster with A100 multi-instance GPUs (MIG), and we have deployed pods on this cluster with a CUDA (v11.4) image. We tested the setup with the nvidia-smi command, but not all metrics are captured: some of them report Insufficient Permissions. Please refer to the outputs below.

Output of the nvidia-smi command:

Output of the nvidia-smi -a command:

    FB Memory Usage
        Total                    : Insufficient Permissions
        Used                     : Insufficient Permissions
        Free                     : Insufficient Permissions
    BAR1 Memory Usage
        Total                    : Insufficient Permissions
        Used                     : Insufficient Permissions
        Free                     : Insufficient Permissions
    Compute Mode                 : Default
    Utilization
        Gpu                      : N/A
        Memory                   : N/A
        Encoder                  : N/A
        Decoder                  : N/A

Output of the nvidia-smi dmon command:

    gpu   pwr gtemp mtemp    sm   mem   enc   dec  mclk  pclk
    Idx     W     C     C     %     %     %     %   MHz   MHz
      0    42    31    31     -     -     -     -  1215   210
      0    42    31    31     -     -     -     -  1215   210
      0    42    31    31     -     -     -     -  1215   210
      0    42    31    31     -     -     -     -  1215   210
      0    42    31    31     -     -     -     -  1215   210
      0    42    31    31     -     -     -     -  1215   210

The output shows Insufficient Permissions for memory usage and N/A for GPU utilization.

We need these two metrics in particular, as they are important for the monitoring we want to build on top of them.
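
Since this is a MIG setup, I am also unsure whether these values have to be read from the parent GPU handle or from the individual MIG device handles. The sketch below shows how I would try to query them per MIG instance; it assumes a recent nvidia-ml-py (pynvml) release that exposes the MIG handle APIs, and it simply prints whatever error NVML returns:

    # mig_metrics.py - sketch: query memory usage and utilization per MIG instance.
    # Assumes a recent nvidia-ml-py (pynvml) release with the MIG handle APIs.
    import pynvml

    pynvml.nvmlInit()
    try:
        parent = pynvml.nvmlDeviceGetHandleByIndex(0)
        for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(parent)):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(parent, i)
            except pynvml.NVMLError:
                continue  # no MIG instance configured at this index
            try:
                mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
                print(f"MIG {i}: memory used {mem.used} / total {mem.total} bytes")
            except pynvml.NVMLError as err:
                print(f"MIG {i}: memory info failed -> {err}")
            try:
                util = pynvml.nvmlDeviceGetUtilizationRates(mig)
                print(f"MIG {i}: gpu util {util.gpu}%, mem util {util.memory}%")
            except pynvml.NVMLError as err:
                print(f"MIG {i}: utilization failed -> {err}")
    finally:
        pynvml.nvmlShutdown()

If these calls also fail here, the problem is presumably at the NVML/driver level rather than in the Datadog integration.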

Please guide us on how we can resolve this issue.