Please see the responses below.
- How should “ulMaxNumDecodeSurfaces”, “ulNumDecodeSurfaces”, and “ulNumOutputSurfaces” be set, and what is the difference between them?
ulNumDecodeSurfaces is the number of internally allocated decode surfaces. The client application can set it according to its memory budget and performance requirements for a typical workload and application design.
The bitstream parser also reports min_num_decode_surfaces, the absolute minimum value of ulNumDecodeSurfaces required to decode any clip correctly. It is returned in the CUVIDEOFORMAT struct passed to the sequence callback.
‘ulNumOutputSurfaces’ is the maximum number of internally allocated output surfaces that can be mapped simultaneously. It enables pipelining of the decode and display/post-processing stages, so it should be configured according to the application design and the number of steps in the post-processing part of the pipeline. Our SDK sample apps set it to 2, which is sufficient for their functionality.
‘ulMaxNumDecodeSurfaces’ is needed when initializing the parser. It is the maximum number of decode surfaces the parser will assume, and it can be set to the same value as ulNumDecodeSurfaces.
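As a sketch, here is where each of these fields appears in the NVDECODE API (nvcuvid.h). The callback name and the specific values chosen (1 as the initial ulMaxNumDecodeSurfaces placeholder, 2 output surfaces) are illustrative assumptions, not requirements; error handling is omitted:

```c
#include <cuda.h>
#include <nvcuvid.h>
#include <stddef.h>

/* Hypothetical sequence callback: the parser hands us a CUVIDEOFORMAT whose
   min_num_decode_surfaces field is the lower bound for ulNumDecodeSurfaces. */
static int CUDAAPI HandleVideoSequence(void *pUserData, CUVIDEOFORMAT *pFormat)
{
    CUVIDDECODECREATEINFO createInfo = { 0 };
    createInfo.CodecType = pFormat->codec;
    createInfo.ulWidth   = pFormat->coded_width;
    createInfo.ulHeight  = pFormat->coded_height;
    /* At least the parser's minimum; add headroom for deeper pipelines. */
    createInfo.ulNumDecodeSurfaces = pFormat->min_num_decode_surfaces;
    /* Two mapped output surfaces suffice for a simple decode -> postprocess
       pipeline, as in the SDK samples. */
    createInfo.ulNumOutputSurfaces = 2;
    /* ... fill in OutputFormat, target size, etc., then cuvidCreateDecoder(). */

    /* Returning min_num_decode_surfaces lets the parser raise its
       ulMaxNumDecodeSurfaces to the clip's real requirement. */
    return pFormat->min_num_decode_surfaces;
}

static void CreateParser(void)
{
    CUVIDPARSERPARAMS parserParams = { 0 };
    parserParams.CodecType = cudaVideoCodec_H264;  /* assumed codec */
    parserParams.ulMaxNumDecodeSurfaces = 1;       /* placeholder until the
                                                      sequence callback runs */
    parserParams.pfnSequenceCallback = HandleVideoSequence;
    CUvideoparser parser = NULL;
    cuvidCreateVideoParser(&parser, &parserParams);
}
```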
- What is the difference between “coded_width” and the image width? In CUVIDEOFORMAT, I found that “coded_width” differs from my image width; for example, when the image width is 1200, “coded_width” is 1216.
‘coded_width’ is calculated as (image width in MBs * 16), i.e. the image width rounded up to whole macroblocks for H.264. Codecs with larger coding blocks (for example HEVC, whose CTUs can be up to 64 pixels wide) round up to a coarser multiple, which is how a 1200-pixel-wide image can report a coded_width of 1216.