Nvinfer input image padding and scaling

• Hardware Platform (Jetson / GPU) GTX 2060 Super
• DeepStream Version 5
• TensorRT Version 7
• NVIDIA GPU Driver Version (valid for GPU only) CUDA 10.2

Hi there!
I am using DeepStream with a primary detector that requires a padded 640×640 input image, but our camera captures frames at 1280×720. How can we pad and scale the image to the size the model expects? Should we do that in nvinfer or in streammux?

Please help!

The stream muxer has a padding property that maintains aspect ratio. Just set “enable-padding” to 1 (or True or TRUE or true depending on the language).
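In a deepstream-app style config, that corresponds to something like the following sketch (the gpu-id and batch-size values are just placeholders; match them to your setup):

```
[streammux]
gpu-id=0
# placeholder batch size; match it to your number of sources
batch-size=1
# scale every frame to the model resolution, padding to keep the aspect ratio
width=640
height=640
enable-padding=1
```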


Yeah, as Mdegans said, you can do the scaling and padding in streammux: set the resolution your model needs in the streammux properties in the config, and also enable the property Mdegans mentioned.
You can also let nvinfer do the scaling and padding.
Please check the nvinfer documentation.
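For example, assuming a 640×640 ONNX model, the nvinfer config could look roughly like this sketch (the model and label file names are placeholders); maintain-aspect-ratio makes nvinfer letterbox instead of stretching:

```
[property]
gpu-id=0
# placeholders: point these at your actual model / label files
onnx-file=yolo_640x640.onnx
labelfile-path=labels.txt
batch-size=1
# nvinfer scales frames to the network's own input size (640x640 here);
# maintain-aspect-ratio=1 pads to keep the aspect ratio instead of stretching
maintain-aspect-ratio=1
```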

1. For the streammux plugin, I see.
2. For nvinfer, I do not see where the config for image scaling or padding is; maybe it performs those operations automatically without any config? I would appreciate more explanation.

Thank you! Have a nice day!

For nvinfer, the buffer conversion is done in sources/gst-plugins/gst-nvinfer/gstnvinfer.cpp, in gst_nvinfer_process_full_frame or gst_nvinfer_process_objects, depending on whether process_full_frame is true or false.
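To make the numbers concrete for this thread's resolutions, here is a small stand-alone C++ sketch (not DeepStream source code) of what an aspect-ratio-preserving scale and pad from 1280×720 to a 640×640 network input works out to:

```cpp
// Hypothetical illustration only: the arithmetic behind an
// aspect-ratio-preserving resize + pad from 1280x720 to 640x640.
#include <algorithm>
#include <cstdio>

int main()
{
    const double src_w = 1280.0, src_h = 720.0;  // camera resolution
    const double net_w = 640.0,  net_h = 640.0;  // model input size

    // One scale factor for both axes keeps the aspect ratio.
    const double scale = std::min(net_w / src_w, net_h / src_h);  // 0.5

    const double scaled_w = src_w * scale;   // 640
    const double scaled_h = src_h * scale;   // 360
    const double pad_x = net_w - scaled_w;   // 0 columns of padding
    const double pad_y = net_h - scaled_h;   // 280 rows of padding

    std::printf("scale=%.3f scaled=%.0fx%.0f pad=%.0fx%.0f\n",
                scale, scaled_w, scaled_h, pad_x, pad_y);
    return 0;
}
```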