Hi - I’m seeing very poor behavior in my application on Windows 10 using NVDEC, but only in fullscreen. My usage pattern is similar to the DecodeGL sample, the primary difference being that I have a secondary thread which owns the CUDA context and a matched secondary GL context which shares textures with the primary. The other difference is I’ve moved off of the deprecated interop APIs and am using cuGraphicsMapResources() and its brethren.
My application performs perfectly in windowed mode, and also in fullscreen mode with vsync off. However, in fullscreen with vsync on, I see very significant stuttering. In particular, the glFinish() call I use to synchronize the secondary thread can stall for as long as 10 seconds, followed by normal behavior for a couple of seconds, followed again by significant stalling. This call follows the glTexSubImage2D() call that updates a texture from the latest NV12->RGBA PBO, per the DecodeGL sample's pattern.
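For concreteness, the secondary (decode) thread's per-frame update looks roughly like the sketch below. The function and variable names here are illustrative, not my exact code, but the sequence of CUDA interop and GL calls matches what I'm doing:

```cpp
// Sketch of the decode thread's per-frame upload path (DecodeGL-style,
// but using the newer graphics-interop API instead of the deprecated one).
CUgraphicsResource cuPboResource;  // registered once via cuGraphicsGLRegisterBuffer()

void UploadFrame(CUdeviceptr rgbaFrame, size_t frameBytes,
                 GLuint pbo, GLuint tex, int width, int height)
{
    // Map the PBO into CUDA and copy in the converted NV12->RGBA frame.
    cuGraphicsMapResources(1, &cuPboResource, /*hStream=*/0);
    CUdeviceptr pboPtr;
    size_t pboSize;
    cuGraphicsResourceGetMappedPointer(&pboPtr, &pboSize, cuPboResource);
    cuMemcpyDtoD(pboPtr, rgbaFrame, frameBytes);
    cuGraphicsUnmapResources(1, &cuPboResource, /*hStream=*/0);

    // Update the shared texture from the PBO on the secondary GL context.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, /*offset into PBO=*/nullptr);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    // This is the call that stalls (up to ~10 s) in fullscreen with vsync on.
    glFinish();
}
```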
An additional clue: if I put a glFinish() call on the display (primary) thread, everything behaves fairly well, though the performance implications of that call make me want to avoid it. Yet another clue: if I eliminate any interaction with OpenGL entirely (which of course prevents me from doing anything useful with the NVDEC output), I see similar stalling at the call to cuvidDecodePicture() instead, which can also take 10+ seconds.
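To be explicit about where that workaround sits, the display thread's loop is roughly this (the draw helper name is made up; the loop structure is illustrative):

```cpp
// Primary (display) thread render loop. Adding the glFinish() here is the
// workaround that smooths out the stutter, at a noticeable performance cost.
while (running) {
    DrawFrameUsingSharedTexture();  // samples the texture the decode thread updates
    glFinish();                     // workaround: full pipeline drain before present
    SwapBuffers(hdc);               // vsync-paced present on Windows
}
```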
My intuition is that there is some strange interaction between my two GL contexts and the CUDA context, all needing to synchronize with one another. I know the GL driver seems to engage a different mode when fullscreen is enabled, and my hunch is that this mode robs the driver of a key opportunity to synchronize these contexts (which adding the glFinish() to the primary thread seems to restore). I've experimented with inserting cudaThreadSynchronize() at a couple of different points, but no luck. Any hints would be greatly appreciated, as I've already spent a fruitless week trying to track this down.