Optimizing Video Memory Usage with the NVDECODE API and NVIDIA Video Codec SDK

Originally published at: Optimizing Video Memory Usage with the NVDECODE API and NVIDIA Video Codec SDK | NVIDIA Technical Blog

The NVIDIA Video Codec SDK consists of GPU hardware-accelerated APIs for the following tasks:

- Video encoding, with the NVENCODE API
- Video decoding, with the NVDECODE API (formerly known as NVCUVID)

While writing an application using the NVDECODE or NVENCODE APIs, it is crucial to use video memory in an efficient way. If an application uses multiple decoders…

Hi there.

Figure 2 (A typical decoding pipeline) shows the video parser as an optional component. The article also says: “These components are not dependent on each other and hence can be used independently.”
But to calculate ulNumDecodeSurfaces, the article suggests using CUVIDEOFORMAT::min_num_decode_surfaces, which is provided by the sequence callback from the parser.
Does this mean that using the video parser is actually required, not optional?
If the parser is not required, how do I determine a value for ulNumDecodeSurfaces without the video parser callback and CUVIDEOFORMAT::min_num_decode_surfaces?
Thank you!

Does this mean that using the video parser is actually required, not optional?

Hi @petr.mpp,

Let me clarify. Developers have the freedom to implement and use their own parser. A parser is a must for the decode pipeline, but the NVIDIA video parser can be replaced by your own implementation.
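For reference, here is a minimal sketch (not the article's code; error handling omitted) of how the NVIDIA parser is typically wired up with cuvidCreateVideoParser. The three callbacks are where your application plugs in, and the sequence callback is where CUVIDEOFORMAT::min_num_decode_surfaces arrives; the CreateParser helper name is illustrative, not part of the SDK:

```cpp
#include "nvcuvid.h"

// Callback prototypes; HandleVideoSequence is sketched in the next snippet.
static int CUDAAPI HandleVideoSequence(void* pUserData, CUVIDEOFORMAT* pVideoFormat);
static int CUDAAPI HandlePictureDecode(void* pUserData, CUVIDPICPARAMS* pPicParams);
static int CUDAAPI HandlePictureDisplay(void* pUserData, CUVIDPARSERDISPINFO* pDispInfo);

// Illustrative helper: create the NVIDIA parser and register the callbacks.
CUvideoparser CreateParser(cudaVideoCodec eCodec, void* pUserData)
{
    CUVIDPARSERPARAMS parserParams = {};
    parserParams.CodecType              = eCodec;
    // Provisional value; the parser reports the real requirement through
    // CUVIDEOFORMAT::min_num_decode_surfaces in the sequence callback.
    parserParams.ulMaxNumDecodeSurfaces = 1;
    parserParams.ulMaxDisplayDelay      = 1;
    parserParams.pUserData              = pUserData;
    parserParams.pfnSequenceCallback    = HandleVideoSequence;
    parserParams.pfnDecodePicture       = HandlePictureDecode;
    parserParams.pfnDisplayPicture      = HandlePictureDisplay;

    CUvideoparser parser = nullptr;
    cuvidCreateVideoParser(&parser, &parserParams);
    return parser;
}
```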

ulNumDecodeSurfaces can be derived from the DPB information in the bitstream, obtained from either your own parser or the NVIDIA video parser. Alternatively, it can be set to the maximum DPB size defined in the codec specification.
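To make both options concrete, here is a minimal sketch, assuming a current CUDA context, an 8-bit stream, and error handling omitted. The first function sizes ulNumDecodeSurfaces from CUVIDEOFORMAT::min_num_decode_surfaces inside the sequence callback; the second (a hypothetical helper, not an SDK function) picks a worst-case value from the codec specification's maximum DPB size for pipelines that bypass the NVIDIA parser:

```cpp
#include <cuda.h>
#include "nvcuvid.h"

static CUvideodecoder g_decoder = nullptr;

// Option 1: let the NVIDIA parser report the DPB requirement.
// Invoked by the parser once the stream headers have been parsed.
static int CUDAAPI HandleVideoSequence(void* pUserData, CUVIDEOFORMAT* pVideoFormat)
{
    CUVIDDECODECREATEINFO createInfo = {};
    createInfo.CodecType           = pVideoFormat->codec;
    createInfo.ChromaFormat        = pVideoFormat->chroma_format;
    createInfo.bitDepthMinus8      = pVideoFormat->bit_depth_luma_minus8;
    createInfo.ulWidth             = pVideoFormat->coded_width;
    createInfo.ulHeight            = pVideoFormat->coded_height;
    createInfo.ulTargetWidth       = pVideoFormat->coded_width;
    createInfo.ulTargetHeight      = pVideoFormat->coded_height;
    createInfo.OutputFormat        = cudaVideoSurfaceFormat_NV12; // 8-bit output
    createInfo.DeinterlaceMode     = cudaVideoDeinterlaceMode_Weave;
    createInfo.ulNumOutputSurfaces = 2;
    // Size the decode surface pool from the DPB info parsed out of the
    // bitstream instead of a worst-case constant.
    createInfo.ulNumDecodeSurfaces = pVideoFormat->min_num_decode_surfaces;

    cuvidCreateDecoder(&g_decoder, &createInfo);

    // Returning a value > 1 updates the parser's ulMaxNumDecodeSurfaces
    // (Video Codec SDK 9.0 and later).
    return pVideoFormat->min_num_decode_surfaces;
}

// Option 2 (hypothetical helper): without the NVIDIA parser, fall back to
// the maximum DPB size from the codec spec, plus headroom for the picture
// being decoded and display latency.
static unsigned int WorstCaseNumDecodeSurfaces(cudaVideoCodec eCodec)
{
    switch (eCodec) {
    case cudaVideoCodec_H264: return 16 + 4; // H.264 DPB holds at most 16 frames
    case cudaVideoCodec_HEVC: return 16 + 4; // HEVC DPB holds at most 16 frames
    case cudaVideoCodec_VP9:  return 8 + 4;  // VP9 keeps up to 8 reference frames
    default:                  return 8;      // conservative default
    }
}
```

Note the trade-off: the worst-case fallback is safe but wastes memory on streams that signal a smaller DPB, which is exactly the saving the min_num_decode_surfaces approach in the article targets.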

Thank you.
Vikas
