I am working with the Decoder SDK (NVDEC). The (somewhat sparse) documentation mentions that users would normally create their own Parser and Source, which is what I want to do, as I am trying to create a very streamlined and compact H264 player for both Live and Playback use in a Video Surveillance app.
I think I understand how to create the decoder and how to directly deliver the compressed frames to DecodePicture(). However, it is not at all clear how and when to obtain the decoded NV12 frames. In the sample code, the parser 'magically' does a callback with an 'index' that gets passed to MapVideoFrame(), but it's not at all clear how it knows to do that.
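For context, here is my reading of how the sample wires those callbacks up — this is just a simplified sketch of the cuvid flow as I understand it (passing the decoder handle as the parser's user data is my own simplification, and I've left out the sequence callback and all error checking):

```c
#include <nvcuvid.h>  /* NVIDIA Video Codec SDK parser/decoder API */

/* The parser calls this when it has a complete picture ready to decode.
 * pic->CurrPicIdx is a surface index the parser itself picked. */
static int CUDAAPI HandlePictureDecode(void *userData, CUVIDPICPARAMS *pic)
{
    CUvideodecoder dec = (CUvideodecoder)userData;
    cuvidDecodePicture(dec, pic);        /* kicks off HW decode into surface CurrPicIdx */
    return 1;
}

/* The parser calls this when a decoded picture is ready *in display order*.
 * disp->picture_index is the same index that went through DecodePicture. */
static int CUDAAPI HandlePictureDisplay(void *userData, CUVIDPARSERDISPINFO *disp)
{
    CUvideodecoder dec = (CUvideodecoder)userData;
    CUVIDPROCPARAMS vpp = {0};
    vpp.progressive_frame = disp->progressive_frame;
    vpp.top_field_first   = disp->top_field_first;

    CUdeviceptr frame = 0;
    unsigned int pitch = 0;
    cuvidMapVideoFrame(dec, disp->picture_index, &frame, &pitch, &vpp);
    /* ... copy/render the NV12 surface here ... */
    cuvidUnmapVideoFrame(dec, frame);
    return 1;
}
```

So the 'magic' index appears to originate inside the parser (via CUVIDPARSERPARAMS::pfnDecodePicture and pfnDisplayPicture set before cuvidCreateVideoParser, with data fed in through cuvidParseVideoData), which is exactly the piece I'd be replacing — hence my questions below.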
To be a bit more specific:
How do I know when/if it's OK to deliver more frames to DecodePicture? Does it block if it gets too busy, or if I fail to pull decoded frames out of it?
The decoder is typically going to buffer up a few frames before the first displayable frame becomes available. How do I know when a decoded frame is available, and where do I find it? I can see where the callback gets passed to the parser (the one I want to replace), but I can't see where in that basic five-method decoder API there is any sort of callback or trigger. What am I missing?