How to monitor GPU hardware resource usage at any time?

Dear all,
While running applications on a Tesla V100 NVIDIA GPU, I need to know the percentage of the GPU in use at any time (active SMs, cores).

I have searched and the only thing I have found is the Volatile GPU-Util figure provided by nvidia-smi. However, that only tells me the percentage of the last second during which the GPU was busy.
I need something different: the percentage of the GPU's hardware resources that are actually in use (at least the SMs).
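For completeness, the time-based figure above can at least be sampled programmatically instead of read off the console. A minimal sketch (it assumes `nvidia-smi` is on the PATH and uses the standard `--query-gpu` fields `utilization.gpu` and `utilization.memory`; note this is still time-based utilization, not a count of active SMs):

```python
import subprocess

def parse_utilization(csv_line):
    """Parse one line of `--format=csv,noheader,nounits` output,
    e.g. "57, 23" -> (57, 23) meaning 57% GPU, 23% memory."""
    gpu, mem = (int(field.strip()) for field in csv_line.split(","))
    return gpu, mem

def poll_utilization():
    """Query the driver once, one (gpu%, mem%) tuple per installed GPU.
    The counters live in the driver, so this works no matter which API
    (CUDA or OpenCL) launched the kernels."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,utilization.memory",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return [parse_utilization(line) for line in out.stdout.strip().splitlines()]
```

Calling `poll_utilization()` in a loop with a short `time.sleep()` gives a crude real-time monitor.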

Is there any way to achieve it?

NVVP provides multiple utilization graphs when you click “Examine Individual Kernels”, including one for the SMs.

Hi saulocpp!

Thank you very much for your reply.
You are right. I forgot to clarify that I am working with OpenCL, and NVVP no longer supports OpenCL profiling.

However, I found that ‘nvidia-smi dmon’ gives me real-time SM utilization percentages, but:
a) it only reports the SM percentage (no further detail such as cores, registers, etc.);
b) I do not know how reliable it is for monitoring OpenCL workloads (it outputs plausible results, but I do not know if I can trust them).
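If it helps anyone, the dmon output is easy to collect into a script for logging. A small parsing sketch (the column layout below matches the `-s u` header on my driver version, i.e. gpu index, sm%, mem%, enc%, dec%; it may differ between driver versions, so check the `#` header lines first):

```python
def parse_dmon(text):
    """Parse the text output of `nvidia-smi dmon -s u` into per-sample dicts.

    Lines starting with '#' are headers. Assumed data columns (may vary
    by driver version): gpu index, sm%, mem%, enc%, dec%.
    """
    samples = []
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        idx, sm, mem, enc, dec = line.split()[:5]
        samples.append({"gpu": int(idx), "sm": int(sm),
                        "mem": int(mem), "enc": int(enc), "dec": int(dec)})
    return samples

# Demo on illustrative (made-up) values in the dmon layout:
demo = """\
# gpu    sm   mem   enc   dec
# Idx     %     %     %     %
    0    85    40     0     0
    0    91    42     0     0
"""
demo_samples = parse_dmon(demo)
```

You can feed it the output of `nvidia-smi dmon -s u -c N` (N samples, then exit) captured with `subprocess`.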

Do you know whether nvidia-smi works correctly with OpenCL as well?
Can you suggest any other way to get what I want, with more detail and more reliable results?

Thanks

I have an update:
I just realized that the command ‘nvidia-smi dmon -s u’ reports a percentage of time for each sample (1 s or 1/6 s, depending on the device).
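Since each sample covers such a short window, averaging a batch of sm% samples gives a steadier figure. A trivial sketch (the caveat in the docstring is the important part):

```python
def mean_sm_utilization(sm_samples):
    """Average a window of sm% samples from `nvidia-smi dmon -s u`.

    This smooths the per-sample jitter, but it is still the fraction of
    *time* the SMs were busy, not the fraction of SMs that were active.
    """
    if not sm_samples:
        return 0.0
    return sum(sm_samples) / len(sm_samples)
```

For example, `mean_sm_utilization([80, 90, 100])` returns `90.0`.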

So I still cannot obtain the percentage of SMs that are actually working under OpenCL.

How can I achieve it?