About the resize method in nvvideoconvert/nvstreammux


  • GPU Tesla T4
  • DeepStream 5.0.1 container
  • CUDA 10.2
  • TensorRT 7.0.0 and 7.1.3
  • Driver 450.51.06
  • Question



The image-resizing method used in the pipeline yields different results from OpenCV/PIL, making it difficult to align offline train/eval with online deployment. Is it possible to reveal some details of DeepStream's resizing implementation?


We have been running a detector engine in a DeepStream pipeline for some time. While investigating the (mis)detection results, we noticed that different image-resize implementations can introduce noticeable classification error in our case, and we could not find details of the image resizing performed in nvvideoconvert or nvstreammux.

  • nvvideoconvert has an interpolation-method option; we tried nearest and bilinear, as the others are seldom used in production.
  • nvstreammux has no such option, but it appears to use nearest (the dGPU default).

Some experiments suggest the resizing operation produces different results from other common libraries, e.g. OpenCV or PIL, which we use in offline train/eval. We know that OpenCV and PIL do not align with each other either, but without further details we cannot choose or implement a resizing routine that unifies preprocessing across all environments.
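To illustrate why resizers disagree even for something as simple as nearest-neighbour: the result depends on the coordinate-mapping convention. Below is a minimal pure-Python sketch (illustrative conventions only, not DeepStream's actual code); cv::INTER_NEAREST effectively truncates dst * scale, while a pixel-center convention samples at (dst + 0.5) * scale, and the two pick different source pixels:

```python
def resize_nn(img, out_h, out_w, half_pixel=False):
    """Nearest-neighbour resize of a 2-D list, with a selectable
    coordinate convention:
      half_pixel=False: src = floor(dst * scale)          (truncation,
                        what cv::INTER_NEAREST effectively does)
      half_pixel=True:  src = floor((dst + 0.5) * scale)  (pixel-center)
    """
    in_h, in_w = len(img), len(img[0])
    sy, sx = in_h / out_h, in_w / out_w

    def src(i, scale, limit):
        c = (i + 0.5) * scale if half_pixel else i * scale
        return min(int(c), limit - 1)  # int() truncates for c >= 0

    return [[img[src(y, sy, in_h)][src(x, sx, in_w)] for x in range(out_w)]
            for y in range(out_h)]

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 ramp
print(resize_nn(img, 2, 2))                   # [[0, 2], [8, 10]]
print(resize_nn(img, 2, 2, half_pixel=True))  # [[5, 7], [13, 15]]
```

For a 2x downscale the truncation convention keeps rows/columns 0 and 2, while the pixel-center convention keeps 1 and 3 — an entire pixel of disagreement, which is why two "nearest" resizers can produce systematically different crops.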

By digging into nvinfer, we found that the image-buffer transformations are done by NvBufSurfTransform with the default interpolation method (nearest). It also seems the interpolation-method option maps directly to the NvBufSurfTransformInter_XXX values. However, the color conversion (NV12 to BGRA in our case) and the resizing are done in the same call to NvBufSurfTransform. Are they implemented separately, or are they separable?

libnvbufsurftransform.so depends on libnppig.so, which provides resizing functions, but not on libnppicc.so, which provides NV12ToBGR conversions etc. Is the resizing done by NPP? Is there a corresponding color-conversion function in NPP?

Specifically we are interested in aligning the following options:

  • cv::INTER_NEAREST or PIL.Image.NEAREST, with NvBufSurfTransformInter_Nearest
  • cv::INTER_LINEAR or PIL.Image.BILINEAR, with NvBufSurfTransformInter_Bilinear

And if possible, could nvstreammux gain an interpolation-method option in a future release, since it can perform image resizing/resampling when batching?


The current nvstreammux uses the NvBufSurfTransformInter_Bilinear method for scaling.

We will review your request and investigate it.

After confirming with the development team: the interpolation-method option for nvstreammux will be available in the next version of the DeepStream SDK. We will announce the new version when it is available.

Besides, if the input sources all have the same resolution, you can set the nvstreammux "width" and "height" to the original video resolution, and use nvvideoconvert after nvstreammux for scaling.


Hi @Fiona.Chen, thanks for the good message.

Yes, in the current pipeline we place nvvideoconvert before nvstreammux to rescale individual buffers, which works and is configurable.

However, it seems like a loss of performance to split the NV12-to-BGRA conversion and the resizing into two separate nvvideoconvert elements just to control the resizing alone.

After some experiments it seems that, with the proper setup, the nppiResizeSqrPixel family can almost exactly reproduce OpenCV's results. So if you could confirm that this function (found in libnvbufsurftransform.so) is the one used for the resizing step, the problem would be mostly solved.

Edit: nppiResizeSqrPixel details
For resizing a 1920x1080 grayscale buffer to 960x544,
use xFactor = 960.0 / 1920.0, yFactor = 544.0 / 1080.0:

  • xShift = yShift = 0.5, interpolation = NPPI_INTER_NN <==> cv::INTER_NEAREST, identical results
  • xShift = yShift = 0.0, interpolation = NPPI_INTER_LINEAR <==> cv::INTER_LINEAR, with some pixel values +/- 1
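One hypothetical model of why xShift = 0.5 produces the match above (our guess, consistent with the empirical result but not taken from NPP's internals): if the shift moves the sampled source coordinate by half a pixel before nearest rounding, then shift = 0.5 reproduces OpenCV's floor-based index selection exactly. A quick exact-rational check in Python:

```python
from fractions import Fraction

def cv_nearest_index(dst, src_size, dst_size):
    # cv::INTER_NEAREST index choice: floor(dst * src_size / dst_size), clamped
    return min(dst * src_size // dst_size, src_size - 1)

def npp_nearest_model(dst, src_size, dst_size, shift):
    # Hypothetical model: sample at dst / xFactor - shift and round to the
    # nearest source index (clamped). A guess fitted to the empirical
    # equivalence reported above, NOT the actual NPP implementation.
    factor = Fraction(dst_size, src_size)             # e.g. 960/1920
    coord = Fraction(dst) / factor - Fraction(shift)
    return min(max(int(coord + Fraction(1, 2)), 0), src_size - 1)

# With shift = 1/2 the model matches cv::INTER_NEAREST at every output
# index for the 1920x1080 -> 960x544 case discussed above.
for src_size, dst_size in [(1920, 960), (1080, 544)]:
    assert all(cv_nearest_index(d, src_size, dst_size)
               == npp_nearest_model(d, src_size, dst_size, Fraction(1, 2))
               for d in range(dst_size))
print("model matches cv::INTER_NEAREST with shift = 1/2")
```

Fraction arithmetic is used so the comparison is exact at every index, with no floating-point boundary effects. The +/- 1 pixel differences in the bilinear case are a separate issue (filter-weight precision rather than index selection).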

So you are talking about the OpenCV CUDA version?

No, just the CPU version. The CUDA version was not tested.

We use cudaTexture for bilinear and nearest interpolation; the other interpolation modes use NPP.
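For anyone trying to reproduce this on the CPU: the CUDA C Programming Guide ("Texture Fetching", linear filtering) documents that the fractional sampling weight in texture linear filtering is stored in 9-bit fixed point with 8 fractional bits, which alone can explain +/- 1 differences against a full-precision bilinear. Below is a small Python emulation of the documented 1-D formula (the actual DeepStream kernel is not public, so everything beyond the documented formula is an assumption):

```python
import math

def tex1d_linear(texels, x):
    """Emulates a 1-D CUDA texture fetch with linear filtering at
    unnormalized coordinate x, per the CUDA C Programming Guide:
    tex(x) = (1 - a) * T[i] + a * T[i + 1], where xb = x - 0.5,
    i = floor(xb), and a = frac(xb) quantized to 8 fractional bits."""
    xb = x - 0.5
    i = math.floor(xb)
    a = math.floor((xb - i) * 256.0) / 256.0   # 8-bit fixed-point weight

    def t(j):  # clamp addressing mode (an assumption for this sketch)
        return texels[min(max(j, 0), len(texels) - 1)]

    return (1.0 - a) * t(i) + a * t(i + 1)

print(tex1d_linear([0.0, 100.0], 0.5))          # 0.0  (exact texel center)
print(tex1d_linear([0.0, 100.0], 1.0))          # 50.0 (midway between centers)
print(tex1d_linear([0.0, 256.0], 0.5 + 1/512))  # 0.0  (sub-1/256 weight
                                                #       quantized away)
```

The last call shows the quantization: a full-precision bilinear would return 0.5 there, but the 8-bit weight rounds the fraction down to zero. 2-D bilinear applies the same scheme along both axes.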

Thanks very much. We'll experiment with cudaTexture.