Cyclic framerate drops

Hi,

My application is showing cyclic framerate drops for no apparent reason (it renders the same point of view of a simple, static scene).
I decided to use Nsight to see what’s happening, but unfortunately, when the Nsight graphics debugger is attached, the framerate becomes stable.

Here are some (tiny) screenshots of the frame times over time.

My framerate over time when running standalone; the mean value is 16 fps:
http://hpics.li/bb97329
And when running with Nsight attached, the mean value is 48 fps and the framerate is stable:
http://hpics.li/5c4cd9a
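For context, here is a minimal sketch of the kind of per-frame timer that produces plots like these, assuming a C++ renderer (the FrameTimer class is my illustration, not the application’s actual code):

#include <chrono>
#include <cstdio>

// Logs one frame time (in milliseconds) per line; redirect stdout to a file
// and plot it to get a frame-time-over-time graph like the ones above.
class FrameTimer {
public:
    // Call once per frame, e.g. immediately after the buffer swap.
    void tick() {
        const auto now = std::chrono::steady_clock::now();
        if (m_initialized) {
            const double ms =
                std::chrono::duration<double, std::milli>(now - m_last).count();
            std::printf("%.3f\n", ms);
        }
        m_last = now;
        m_initialized = true;
    }

private:
    std::chrono::steady_clock::time_point m_last;
    bool m_initialized = false;
};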

Moreover, I’ve seen that CPU usage is twice as high when launching standalone as it is with Nsight attached.

I’ve seen there is a similar topic here:
https://devtalk.nvidia.com/default/topic/1011032/cuda-programming-and-performance/performance-is-better-about-10-when-using-nsight-visual-studio-2015-profiler-than-when-executing-the-exe/
And a few other topics on the forum… I’ve been trying to find the source of this issue for months.
I’ve tried disabling the “Threaded optimization” driver option, without success.

The CPU is a Core i7-6700.
The GPU is a GeForce 560 Ti, and the driver version is 385.41.
I also checked with GPU-Z, and the GPU runs at the same clocks in both cases.
The renderer uses OpenGL 4.

Thanks,
Jean-Baptiste.

Hi Crashy,

It’s hard to tell the root cause without a specific example.

Does it reproduce with all other OpenGL applications on your system?
If not, can you provide an example application so we can take a look?

You also mentioned that CPU usage is abnormally high. Can you specify which DLL is taking the CPU time? You can check CPU usage with Microsoft Process Explorer.

It will also tell you where each DLL comes from. For example, the NVIDIA OpenGL drivers are nvoglv32.dll and nvoglv64.dll.

Hi,

Thank you for your answer.
I’ve finally found the cause, but forgot to write up the solution here.
The problem was that I had a few very large textures that I used to generate smaller SDFs during loading, but I never released them. As a result, GPU memory was saturated.
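For anyone hitting the same symptom, the fix boils down to deleting the temporary source textures once the SDFs have been generated. Below is a rough C++/OpenGL sketch of the idea; buildSDFs and the generation step are my placeholders, not the application’s actual code.

#include <vector>
#include <GL/glew.h> // or whichever GL loader the renderer uses

// Generate small SDF textures from large source textures at load time,
// then release the large sources so they don't stay resident in GPU memory.
void buildSDFs(std::vector<GLuint>& largeSourceTextures,
               std::vector<GLuint>& outSDFTextures)
{
    for (GLuint src : largeSourceTextures) {
        GLuint sdf = 0;
        glGenTextures(1, &sdf);
        glBindTexture(GL_TEXTURE_2D, src); // source for the SDF generation pass
        // ... allocate storage for 'sdf' and run the SDF generation here
        // (placeholder for the actual generation code) ...
        outSDFTextures.push_back(sdf);
    }

    // The step that was missing: free the large source textures.
    // Keeping them alive saturated GPU memory and caused the cyclic drops.
    glDeleteTextures(static_cast<GLsizei>(largeSourceTextures.size()),
                     largeSourceTextures.data());
    largeSourceTextures.clear();
}

When VRAM is oversubscribed, the driver has to page resources in and out, which fits the periodic stutter described above; on NVIDIA hardware, the GL_NVX_gpu_memory_info extension (queried through glGetIntegerv) is one way to confirm that kind of saturation.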