GPU MHz Utilization

Greetings,

I am developing a monitoring agent for GPU cards that provides real-time telemetry using the CUDA and NVML libraries.

I’m a GPU programming newbie and I want to understand a little more about how GPU cores operate versus how Intel/AMD CPU cores work.

One formula that can be used for CPUs to estimate workload average peak CPU utilization in MHz (cpumhz) is:

((CPUSPEED * CORES) / 100) * CPULOAD = Workload average peak CPU utilization (MHz)
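To make that concrete (hypothetical numbers): a host with 4 cores at 2,500 MHz each running at 60% average load works out to ((2500 * 4) / 100) * 60 = 6,000 MHz of consumed CPU capacity.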

More details are here
https://vikernel.wordpress.com/tag/vmware-formulas/

So would it be correct to apply the same formula to GPUs, substituting CUDA cores/shaders for “CORES”? Or, since a GPU has a single core clock driving its thousands of cores/shaders, could I just multiply the current clock speed by the reported GPU utilization?
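For reference, here is a rough sketch of the second approach as I imagine it, using the NVML C API (the device index, the choice of SM clock, and the build command are just my assumptions; I also realize NVML’s “GPU utilization” is the percentage of time a kernel was running, not a per-CUDA-core figure):

```c
/* Rough sketch: CPU-style "utilized MHz" estimate for a single GPU (index 0).
 * Build (paths may differ): gcc gpu_mhz.c -o gpu_mhz -lnvidia-ml
 */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlReturn_t rc = nvmlInit();
    if (rc != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(rc));
        return 1;
    }

    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex(0, &dev);          /* device index 0 is a placeholder */

    unsigned int smClock = 0, maxSmClock = 0;
    nvmlDeviceGetClockInfo(dev, NVML_CLOCK_SM, &smClock);        /* current SM clock, MHz */
    nvmlDeviceGetMaxClockInfo(dev, NVML_CLOCK_SM, &maxSmClock);  /* max SM clock, MHz */

    nvmlUtilization_t util;                       /* util.gpu = % of time a kernel was executing */
    nvmlDeviceGetUtilizationRates(dev, &util);

    /* CPU-style estimate: (max clock / 100) * utilization% -> "utilized MHz" */
    double usedMhz = (maxSmClock / 100.0) * util.gpu;

    printf("SM clock: %u MHz (max %u MHz), GPU util: %u%%, estimated %.0f MHz in use\n",
           smClock, maxSmClock, util.gpu, usedMhz);

    nvmlShutdown();
    return 0;
}
```

That treats the whole GPU as one “core” running at the SM clock, which is why I’m unsure whether the CUDA core/shader count belongs in the formula at all.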

Thanks