Hello everyone,
I need some resources that document the low-power state the GPU enters when no work is submitted. I can observe it with nvidia-smi, but I can't find any documentation about it.
For example, my Tesla M2075 drops from 75 W to 28 W (diff = 47 W). This GPU (Fermi) is a bit old, but I can't find any information about this in the Fermi whitepaper.
I suspect part of this difference is the impact of launching nvidia-smi itself. Even so, the idle power of this GPU is ≈ 56 W, so the difference is still 18 W.
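For reference, here is the kind of polling I'm doing to watch the transition (standard nvidia-smi query flags; the field names are from `nvidia-smi --help-query-gpu`):

```shell
# Poll power draw and performance state once per second, in CSV form
nvidia-smi --query-gpu=timestamp,power.draw,pstate --format=csv -l 1
```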
Thank you,
Dorra
Thank you for the link. It was very helpful for understanding device initialization and the CUDA context cost.
Still, I have some questions:
Q1: When we call cudaFree, why don't we see a decrease in power consumption? Should we release the primary CUDA context to see that drop?
Q2: As mentioned in the post, the GPU manages P-states dynamically. Is this true for all Tesla GPUs (from Fermi to Volta)?
Q3: Does NVIDIA provide an API to keep track of performance/power states, or is nvidia-smi the only option?
Thank you in advance,
Dorra
Why would you expect to see a reduction in power consumption when you do a cudaFree?
If you do a free() operation in CPU code, does that mean the CPU is consuming less power?
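To make the distinction concrete: cudaFree only returns an allocation to the driver; the primary context (and the clocks it keeps up) stays alive. What Q1 is really asking about is destroying the primary context, e.g. with cudaDeviceReset(). A minimal sketch (assumes a CUDA-capable device; whether clocks actually drop afterwards is driver- and GPU-dependent):

```c
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    void *buf = NULL;

    cudaMalloc(&buf, 1 << 20);  // implicitly creates the primary context
    cudaFree(buf);              // memory released, but context and clocks remain
    // ...power draw is typically unchanged at this point...

    cudaDeviceReset();          // destroys the primary context
    // ...only now is the driver free to drop the GPU to a lower power state...
    return 0;
}
```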
Q2: Yes
Q3: Power states are not directly visible using the CUDA runtime API. You can use the NVML library (what nvidia-smi uses) and it may be possible to get some visibility into GPU power consumption using the profilers, and/or CUPTI.
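A minimal NVML sketch for Q3, querying the same power draw and P-state that nvidia-smi reports (untested here; requires nvml.h and linking with -lnvidia-ml):

```c
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlDevice_t dev;
    unsigned int power_mw;   // reported in milliwatts
    nvmlPstates_t pstate;    // P0 (max performance) .. P15 (min)

    if (nvmlInit() != NVML_SUCCESS) return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) return 1;

    if (nvmlDeviceGetPowerUsage(dev, &power_mw) == NVML_SUCCESS)
        printf("power draw: %.1f W\n", power_mw / 1000.0);
    if (nvmlDeviceGetPerformanceState(dev, &pstate) == NVML_SUCCESS)
        printf("performance state: P%d\n", (int)pstate);

    nvmlShutdown();
    return 0;
}
```

Polling this in a loop is essentially what `nvidia-smi -l` does.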