Image looks abnormal after decoding.

Hi,

I used NVENC and NVDEC to do encoding and decoding, but the decoded image looks abnormal. I have attached the image, which I saved with OpenCV's imwrite.

I have two questions:

  1. How should “ulMaxNumDecodeSurfaces”, “ulNumDecodeSurfaces” and “ulNumOutputSurfaces” be set, and what is the difference between them?

  2. What is the difference between “coded_width” and the image width? In CUVIDEOFORMAT I found that “coded_width” differs from my image width; for example, if the image width is 1200, then “coded_width” is 1216.

Hope someone can help with this!

Thanks!

Hi,

Can anyone from NVIDIA take a look at this issue?

Thanks!

Hi.

Please see the responses below.

  1. How should “ulMaxNumDecodeSurfaces”, “ulNumDecodeSurfaces” and “ulNumOutputSurfaces” be set, and what is the difference between them?

‘ulNumDecodeSurfaces’ is the number of internally allocated decode surfaces. It can be set by the client application according to its memory and performance requirements for a typical workload and application design.

We also provide min_num_decode_surfaces, which is the absolute minimum value of ulNumDecodeSurfaces required to ensure correct decoding of any clip. This value is reported by the bitstream parser in the CUVIDEOFORMAT struct passed to the sequence callback.

‘ulNumOutputSurfaces’ is the maximum number of internally allocated output surfaces that can be mapped at the same time. This is required for pipelining the decode and display/post-processing stages, and it also needs to be configured according to the application design and the number of steps in the post-processing stage of the pipeline. For our SDK sample apps we set this to 2, which is sufficient for their functionality.

‘ulMaxNumDecodeSurfaces’ is needed when initializing the parser. It is the maximum number of decode surfaces and can be set to the same value as ulNumDecodeSurfaces.
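To tie the three fields together, here is a minimal sketch of how they might be wired up, assuming the Video Codec SDK headers (nvcuvid.h). It only shows the sequence callback and the parser/decoder creation; the decode/display callbacks, error handling and the reconfigure path are omitted, and the initial ulMaxNumDecodeSurfaces value of 8 is just a placeholder.

    // Minimal sketch: how ulMaxNumDecodeSurfaces, ulNumDecodeSurfaces and
    // ulNumOutputSurfaces relate. Decode/display callbacks and error
    // handling are omitted.
    #include <nvcuvid.h>

    static const unsigned int NUM_OUTPUT_SURFACES = 2;  // as in the SDK sample apps

    // Sequence callback: the parser reports the stream properties here,
    // including min_num_decode_surfaces. Create the decoder with at least
    // that many decode surfaces.
    static int CUDAAPI HandleVideoSequence(void *pUserData, CUVIDEOFORMAT *pFormat)
    {
        CUvideodecoder *phDecoder = static_cast<CUvideodecoder *>(pUserData);

        unsigned int numDecodeSurfaces = pFormat->min_num_decode_surfaces;

        CUVIDDECODECREATEINFO ci = {};
        ci.CodecType           = pFormat->codec;
        ci.ChromaFormat        = pFormat->chroma_format;
        ci.OutputFormat        = cudaVideoSurfaceFormat_NV12;
        ci.bitDepthMinus8      = pFormat->bit_depth_luma_minus8;
        ci.DeinterlaceMode     = cudaVideoDeinterlaceMode_Weave;
        ci.ulWidth             = pFormat->coded_width;
        ci.ulHeight            = pFormat->coded_height;
        ci.ulTargetWidth       = pFormat->coded_width;
        ci.ulTargetHeight      = pFormat->coded_height;
        ci.ulNumDecodeSurfaces = numDecodeSurfaces;    // internally allocated decode surfaces
        ci.ulNumOutputSurfaces = NUM_OUTPUT_SURFACES;  // surfaces that can be mapped at the same time
        cuvidCreateDecoder(phDecoder, &ci);

        // Returning a value > 1 tells the parser to use that many decode surfaces.
        return numDecodeSurfaces;
    }

    void CreateParser(CUvideoparser *phParser, CUvideodecoder *phDecoder, cudaVideoCodec codec)
    {
        CUVIDPARSERPARAMS pp = {};
        pp.CodecType              = codec;
        pp.ulMaxNumDecodeSurfaces = 8;   // placeholder upper bound, overridden once
                                         // the sequence callback returns a larger value
        pp.ulMaxDisplayDelay      = 1;
        pp.pUserData              = phDecoder;
        pp.pfnSequenceCallback    = HandleVideoSequence;
        // pfnDecodePicture / pfnDisplayPicture would also be set in a real app.
        cuvidCreateVideoParser(phParser, &pp);
    }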

  2. What is the difference between “coded_width” and the image width? In CUVIDEOFORMAT I found that “coded_width” differs from my image width; for example, if the image width is 1200, then “coded_width” is 1216.

‘coded_width’ is calculated as (image width in macroblocks * 16), i.e. the image width rounded up to the codec's block alignment.
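For reference, the visible picture rectangle is also reported in the display_area field of CUVIDEOFORMAT, and a mapped frame has its own pitch, so only the visible width and height should be copied when handing the frame to something like OpenCV's imwrite (using the full coded_width/pitch instead may be one reason a decoded image looks abnormal). Below is a minimal sketch of that copy, assuming an NV12 frame already mapped with cuvidMapVideoFrame; dpFrame and nPitch are placeholders for the pointer and pitch it returns, and the decoder is assumed to have been created with its target size equal to the coded size.

    // Minimal sketch: copy only the visible region of a decoded NV12 frame
    // into a cv::Mat so it can be saved with imwrite. dpFrame/nPitch are the
    // device pointer and pitch returned by cuvidMapVideoFrame.
    #include <cuda.h>
    #include <nvcuvid.h>
    #include <opencv2/opencv.hpp>

    cv::Mat Nv12ToBgr(CUdeviceptr dpFrame, unsigned int nPitch, const CUVIDEOFORMAT &fmt)
    {
        // Visible picture size; coded_width/coded_height can be larger
        // because of the codec's block alignment (e.g. 1200 vs. 1216).
        int w = fmt.display_area.right  - fmt.display_area.left;
        int h = fmt.display_area.bottom - fmt.display_area.top;

        cv::Mat nv12(h + h / 2, w, CV_8UC1);

        // Luma plane: h visible rows, w visible bytes per row; device rows
        // are nPitch bytes apart, not w.
        CUDA_MEMCPY2D mY = {};
        mY.srcMemoryType = CU_MEMORYTYPE_DEVICE;
        mY.srcDevice     = dpFrame;
        mY.srcPitch      = nPitch;
        mY.dstMemoryType = CU_MEMORYTYPE_HOST;
        mY.dstHost       = nv12.ptr(0);
        mY.dstPitch      = w;
        mY.WidthInBytes  = w;
        mY.Height        = h;
        cuMemcpy2D(&mY);

        // Interleaved chroma plane: starts coded_height rows into the surface
        // when the decoder's target size equals the coded size.
        CUDA_MEMCPY2D mUV = mY;
        mUV.srcDevice = dpFrame + (size_t)fmt.coded_height * nPitch;
        mUV.dstHost   = nv12.ptr(h);
        mUV.Height    = h / 2;
        cuMemcpy2D(&mUV);

        cv::Mat bgr;
        cv::cvtColor(nv12, bgr, cv::COLOR_YUV2BGR_NV12);
        return bgr;  // cv::imwrite("frame.png", bgr) then gives the expected image
    }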

Thanks.

Hi Mandar,

Is there any way to get min_num_decode_surfaces for an input stream?
I found ‘cuvidGetSourceVideoFormat’, but as far as I know it is only suitable for files.

Alternatively, is there a way to know during decoding that ulNumDecodeSurfaces is too low?

Thanks,
Tamir