I am using the CUDA decoder to decode H.264 packets streamed over TCP. The decoder itself works fine, but there seems to be built-in latency: if N packets are delivered, only M frames are produced (where M < N). Only when a new packet comes in is frame M+1 emitted. The question is: is there a way to flush the decoder?
There is a discussion at http://neuron2.net/dgdecnv/cuda/cuda.html that covers the same issue:
- You can flush the decoder by simply setting the EndOfStream flag on a dummy empty packet (set the flags field to CUVID_PKT_ENDOFSTREAM).
- For decoding immediately after seeking, one simple way is to deliver a dummy EndOfStream packet (to flush the decoder and reset the internal state).
Note that resetting the state also means that the decoder will not start decoding again until it receives valid SPS and PPS NALUs, so if you want to seek to non-IDR frames that are not preceded by an SPS/PPS, you may want to send dummy SPS/PPS NALUs to the decoder. (This also applies to streams that do not carry the SPS/PPS in the elementary video stream itself, e.g. MP4.)
Has anybody been able to flush the decoder so that it produces the decoded picture immediately while keeping the decoder alive? Simply sending ENDOFSTREAM stops the decoder. In my case, the client image and the server image need to remain in sync. The server generates the H.264 packets using the NVIDIA GRID SDK.