Bug? Driver issue on Linux - GPU1 dropping to power states other than P0

Hey,

I am running some tests on the performance of various iterative solvers using cuSPARSE and AMGX. For some reason, despite (1) having persistence mode on and (2) setting "Prefer Maximum Performance" in NVIDIA X Server Settings, nvidia-smi shows GPU1 (GPU0 is used to render the Xorg environment) dropping to power state P2 while the benchmarks are running (with GPU1 load at nearly 100%), which skews my benchmark results. Solve times vary by more than 50% between reboots.

Before I did (1) and (2), I used to see GPU1 fall back to P8 while running the benchmarks. What am I missing here? Is there a way through nvidia-smi to lock GPU1 to P0?
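
For reference, this is roughly how I am watching the power state and clocks during a run, in case it matters (a minimal sketch using the pynvml bindings; GPU index 1 and the one-second poll interval are just my setup):

[code]
# Minimal P-state / clock logger for GPU1 (the compute card).
# Assumes the pynvml bindings are installed (pip install nvidia-ml-py);
# run it in a second terminal while the solver benchmark is going.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(1)  # GPU1; GPU0 drives Xorg

try:
    while True:
        pstate = pynvml.nvmlDeviceGetPerformanceState(handle)  # 0 = P0, 2 = P2, 8 = P8, ...
        sm_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)
        mem_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print("P%-2d  SM %4d MHz  MEM %4d MHz  util %3d%%"
              % (pstate, sm_mhz, mem_mhz, util.gpu))
        time.sleep(1.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
[/code]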

Which GPU exactly is GPU1? Is it in the GeForce family?

This topic has likely been covered elsewhere on this forum including here:

[url]https://devtalk.nvidia.com/default/topic/892842/cuda-programming-and-performance/one-weird-trick-to-get-a-maxwell-v2-gpu-to-reach-its-max-memory-clock-/[/url]

[url]https://devtalk.nvidia.com/default/topic/895501/nvidia-downclocks-my-card-when-running-opencl/[/url]

and elsewhere including here:

cuda - nvidia-smi GPU performance measure does not make sense - Stack Overflow

Thanks for the references. I’ll check them out. FYI, both GPUs are GTX 970s.
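
From the titles, I am guessing the application-clocks route is what those threads describe. If so, something along these lines is what I would try via NVML (a sketch only: it needs root, the clock values are whatever the board reports as supported, and from what I gather a GeForce card may simply answer "not supported"):

[code]
# Try to pin GPU1's application clocks to the highest supported pair.
# Sketch only: requires root, and GeForce boards may refuse with NVML_ERROR_NOT_SUPPORTED.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(1)  # GPU1, the compute card

try:
    mem_clocks = pynvml.nvmlDeviceGetSupportedMemoryClocks(handle)  # supported memory clocks, MHz
    top_mem = max(mem_clocks)
    gfx_clocks = pynvml.nvmlDeviceGetSupportedGraphicsClocks(handle, top_mem)
    top_gfx = max(gfx_clocks)
    print("Requesting application clocks: mem %d MHz, graphics %d MHz" % (top_mem, top_gfx))
    pynvml.nvmlDeviceSetApplicationsClocks(handle, top_mem, top_gfx)
except pynvml.NVMLError as err:
    print("NVML refused:", err)  # reportedly common on GeForce parts
finally:
    pynvml.nvmlShutdown()
[/code]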