I’ve tried but failed to produce scalable, smooth video decode and display; after much investigation and parameter tuning I’d be grateful for any advice.
Using two Quadro P5000 cards, I’ve created a 3x2 4K video wall running at 30Hz.
When displaying 6 decodes, I see a stutter in the video once every couple of seconds; see the link below:
(the top-left stream probably shows the problem best)
I’ve based the code on the sample application, which has no post-decode play-out buffering at all, so some stutter is to be expected (it is visible even with a single decode on a 1080p monitor). I understand this is only a sample, and I have therefore introduced a variable-depth queue of decoded, mapped surfaces ahead of rendering, to mask any naturally sporadic output from the decoder.
I believe the parameters I’m interested in are part of the CUVIDDECODECREATEINFO structure, in particular ulNumOutputSurfaces; however, I’m still struggling once I start decoding 6 streams, as seen in the video above.
My understanding is that ulNumDecodeSurfaces sets the total decode surface allocation on the card, and that ulNumOutputSurfaces tells the driver how many surfaces I will keep mapped after decode (these make up my play-out FIFO prior to render).
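For reference, this is roughly how I’m sizing those fields at decoder creation. This is a sketch only: the field names follow the cuvid API, but I’ve stubbed the structure here for illustration rather than pulling in nvcuvid.h, so the stub itself (and the helper name) is a stand-in, not the real header type.

```cpp
// Stand-in for CUVIDDECODECREATEINFO from nvcuvid.h, reduced to the
// fields discussed above (stub for illustration only).
struct DecodeCreateInfoStub {
    unsigned long ulWidth  = 0;
    unsigned long ulHeight = 0;
    unsigned long ulNumDecodeSurfaces = 0; // surfaces the decoder cycles through internally
    unsigned long ulNumOutputSurfaces = 0; // surfaces I may keep mapped after decode
};

// Hypothetical helper showing one of the combinations I've tried.
DecodeCreateInfoStub makeCreateInfo() {
    DecodeCreateInfoStub ci;
    ci.ulWidth  = 3840;          // 4K source
    ci.ulHeight = 2160;
    ci.ulNumDecodeSurfaces = 10; // tried 4, 10 and 20
    ci.ulNumOutputSurfaces = 20; // sized for my play-out FIFO plus headroom
    return ci;
}
```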
Can you share any pointers on how to achieve smooth play-out?
I may be missing something; I’m measuring how long tasks take, to prove that I am calling the DX9 DrawScene at regular 33ms intervals.
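My timing check is along these lines: record a timestamp at each DrawScene call and look at the worst deviation of any frame-to-frame interval from the 33ms target. (Sketch only; the function name is mine, not from any SDK.)

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Given timestamps (ms) of successive DrawScene calls, return the worst
// deviation of any frame-to-frame interval from the target period.
double maxJitterMs(const std::vector<double>& callTimesMs, double targetMs) {
    double worst = 0.0;
    for (size_t i = 1; i < callTimesMs.size(); ++i) {
        double interval = callTimesMs[i] - callTimesMs[i - 1];
        worst = std::max(worst, std::fabs(interval - targetMs));
    }
    return worst;
}
```

A near-zero result is what tells me the render loop itself is pacing correctly, so the stutter appears to come from which decoded surface gets drawn, not from when it is drawn.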
It looks as if I’m out of sync with which buffer is being used on the card. I’ve tried setting:
ulNumOutputSurfaces = 20 or 30
ulNumDecodeSurfaces = 4, 10 or 20
I keep a play-out FIFO of 3 frames, and still see the stutter.
I’ve also tried splitting the renderVideoFrame function into two parts: 1. map on decode (into the FIFO buffer), then 2. the drawScene code.
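The split looks roughly like this, with a plain std::queue standing in for the mapped-surface FIFO. This is a sketch under my own naming (onFrameDecoded/onRenderTick are placeholders for my decode callback and my DX9 render tick), and an int stands in for a mapped surface handle.

```cpp
#include <queue>

// Phase 1: on each decoder output, map the surface and queue it.
// Phase 2: on each 33ms render tick, pop one frame and draw it.
struct PlayoutFifo {
    std::queue<int> fifo;  // mapped-surface handles awaiting display
    int underruns = 0;     // ticks where no decoded frame was ready

    void onFrameDecoded(int mappedSurface) {  // phase 1: map on decode
        fifo.push(mappedSurface);
    }

    // Phase 2: called at the render tick; returns the surface to draw,
    // or -1 on underrun (where I would repeat the previous frame).
    int onRenderTick() {
        if (fifo.empty()) { ++underruns; return -1; }
        int s = fifo.front();
        fifo.pop();
        return s; // caller draws this, then unmaps the surface
    }
};
```

Even with this separation, and the FIFO pre-filled to 3 frames, the periodic stutter remains.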
Any pointers to try, or documentation to read, would be appreciated.