Hi everyone,
I am quite new to CUDA programming and I am exploring the profiling tools. I am working with an NVIDIA Tesla K20X.
While trying to measure the power consumption of a program, I noticed that the following command gives real-time information (refreshing every second) about the GPU, including power consumption:
nvidia-smi -l 1
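As a side note, if you only care about the power numbers, nvidia-smi can also print just those fields (assuming your driver is recent enough to support the --query-gpu option; this is a sketch, not something I have verified on the K20X specifically):

```shell
# Print timestamp, current power draw, and the enforced power limit,
# refreshing once per second, in CSV form for easy logging.
nvidia-smi --query-gpu=timestamp,power.draw,power.limit \
           --format=csv -l 1
```

Redirecting this to a file gives a log you can correlate with when the program starts and stops.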
When launching this command, the initial power consumption shown is 57W / 235W.
Then, after a few seconds, it decreases to 30W / 235W.
When launching my program in parallel, the power consumption increases to 58W / 235W.
Then, when my program finishes, it falls back to 18W / 235W.
From this point on, every time I launch my program and it ends, the power alternates between 58W and 18W.
I understand that when I first launch the command the GPU may still be busy, which would explain why the initial power reading is high and then stabilizes. What I do not understand is why it first decreases only to 30W, whereas after the following tests it decreases to 18W (and never back to 30W).
Is there some kind of flushing on the device that happens only after a program has finished running?
I hope my question is clear enough; sorry for my possibly bad English, I am not a native speaker.
Thanks.