• Hardware Platform (Jetson / GPU): GTX 2060 Super
• DeepStream Version: 5
• TensorRT Version: 7
• NVIDIA GPU Driver Version (valid for GPU only): CUDA 10.2
Hi there!
I am using DeepStream with a primary detector that requires a 640*640 padded input image, but our camera captures frames at 1280*720. How can we pad and scale the image to the size the model expects? Should we do that in nvinfer or in streammux?
The stream muxer has a padding property that maintains aspect ratio. Just set "enable-padding" to 1 (or true/True/TRUE, depending on the language you configure it from).
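For example, in a deepstream-app style config this could look roughly like the fragment below (a sketch: the width/height values assume the 640*640 model from the question, and the other values are illustrative):

```
[streammux]
enable-padding=1
width=640
height=640
batch-size=1
```

With enable-padding=1, the muxer scales the frame to fit the configured width/height while keeping the source aspect ratio, and fills the remaining area with padding.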
1. For the streammux plugin, I see.
2. For nvinfer, I don't see any config option for image scaling or padding. Maybe it performs those operations automatically without any config? I'd appreciate more explanation!
For nvinfer, the buffer conversion is done in
sources/gst-plugins/gst-nvinfer/gstnvinfer.cpp, in gst_nvinfer_process_full_frame() or gst_nvinfer_process_objects(), depending on whether process_full_frame is true or false.
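To make the aspect-ratio-preserving resize concrete, here is a small sketch of the arithmetic behind this kind of "letterbox" scaling (the helper name is mine, not a DeepStream API; DeepStream's exact padding placement may differ):

```python
def letterbox_dims(src_w, src_h, dst_w, dst_h):
    # Scale the source to fit inside the destination while
    # preserving aspect ratio, then report the leftover area
    # that must be filled with padding.
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w = round(src_w * scale)
    new_h = round(src_h * scale)
    return new_w, new_h, dst_w - new_w, dst_h - new_h

# A 1280x720 camera frame scaled into a 640x640 model input:
print(letterbox_dims(1280, 720, 640, 640))  # -> (640, 360, 0, 280)
```

So the 1280*720 frame becomes 640*360, and 280 rows of padding fill the rest of the 640*640 input.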