Please refer to section 4.2 in NVDEC_VideoDecoder_API_ProgGuide.pdf, which explains the definition of ulNumDecodeSurfaces. ulNumDecodeSurfaces should be greater than or equal to the DPB size used in the bitstream. If you specify a value smaller than this, you could encounter corruption. It is recommended that you allocate a few more surfaces than the DPB size for better pipelining.
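For reference, this is roughly where the parameter is set when creating the decoder — a minimal fragment, assuming the standard nvcuvid.h API; the surrounding setup (context, parser, width/height, etc.) is omitted:

```cpp
// Fragment: setting ulNumDecodeSurfaces when creating the decoder.
// numDecodeSurfaces is assumed to have been computed from the stream's
// DPB size (plus a few extra surfaces for pipelining).
CUVIDDECODECREATEINFO createInfo = {};
createInfo.CodecType          = cudaVideoCodec_H264;
createInfo.ulNumDecodeSurfaces = numDecodeSurfaces; // must be >= DPB size
// ... fill in width, height, chroma format, output surfaces, etc. ...

CUvideodecoder decoder = nullptr;
CUresult result = cuvidCreateDecoder(&decoder, &createInfo);
```

If ulNumDecodeSurfaces is below the DPB size, reference frames can be overwritten while still needed, which is the corruption mentioned above.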
This is what is done in our sample application: please refer to Samples\NvCodec\NvDecoder\NvDecoder.cpp and look for function GetNumDecodeSurfaces(…). For H264 we have hardcoded the value to maximum possible number of reference buffers + 4 which comes out to be 20. We are allocating extra surfaces for better pipelining.
I’d like to ask whether this parameter affects decoding latency — I would say it’s negligible, right? The GPU can work on decoding, which might be much faster than mapping and displaying, so it seems best to create more decode buffers via ulNumDecodeSurfaces in order to keep the GPU as busy as possible.
Sorry — I just want to reduce latency as much as possible. Thank you very much!