• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version (valid for GPU only): 525.85.12
• Issue Type (questions, new requirements, bugs): questions
After running object detection (PGIE), I want to customize the size of the input patch (for example, by cropping), as shown in the image below, and then feed this bounding-box data into a classifier (SGIE).
I want to know whether any adjustments are possible besides tuning the input with the ‘maintain-aspect-ratio’ and ‘symmetric-padding’ properties.
Is it possible to customize the input data for the SGIE, without altering the final output bounding-box size, by adjusting the Gst-nvdspreprocess properties?
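For reference, the two properties mentioned above can be set in the SGIE's gst-nvinfer configuration file. A minimal sketch (only these two keys are shown; the rest of the `[property]` group is omitted):

```ini
[property]
# Preserve the crop's aspect ratio when scaling it to the network input
maintain-aspect-ratio=1
# Pad equally on both sides instead of only right/bottom (DeepStream 6.x)
symmetric-padding=1
```

Note that these control how the cropped object is scaled and padded to the network resolution; they do not change which region is cropped.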
Gst-nvdspreprocess can be used with either a PGIE or an SGIE. How many Gst-nvdspreprocess elements will you use in your pipeline? Which one is for the PGIE and which is for the SGIE? Which one do you want to customize?
My pipeline consists of the following elements:
streammux → pgie (detection) → sgie (classification) → osd → sink.
I want to preprocess (or configure) the SGIE input, which is generated after the PGIE.
(As far as I know, DeepStream resizes (upsamples or downsamples) the bounding-box region
to the predefined input size of the SGIE.)
I aim to expand the region of the PGIE output manually and feed it into the SGIE input.
(Preserving scale is important in my SGIE task,
because the task is similar to classifying ‘big’ cars versus ‘small’ cars.
In this case, resizing the bounding box to compose the input patch applies a scale transform, and the size information is lost.)
I’m experiencing some degradation in classification performance due to this resizing.
Hence, my intention is to expand and crop the PGIE output region to a static size, rather than resizing it.
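The fixed-size crop idea above can be sketched in plain Python, independent of DeepStream (the function name, margin policy, and patch size are illustrative assumptions, not DeepStream API):

```python
def fixed_size_crop(bbox, frame_w, frame_h, patch_w, patch_h):
    """Return a patch_w x patch_h crop window centered on bbox and
    clamped to the frame, so the object keeps its pixel scale
    instead of being rescaled to the network input resolution."""
    left, top, w, h = bbox
    cx = left + w / 2.0
    cy = top + h / 2.0
    # Center the fixed-size window on the detection...
    x0 = int(round(cx - patch_w / 2.0))
    y0 = int(round(cy - patch_h / 2.0))
    # ...then shift it back inside the frame instead of shrinking it,
    # so the patch size (and hence the scale) stays constant.
    x0 = max(0, min(x0, frame_w - patch_w))
    y0 = max(0, min(y0, frame_h - patch_h))
    return x0, y0, patch_w, patch_h

# A 40x30 detection at (100, 100) in a 1920x1080 frame,
# expanded to a fixed 224x224 patch:
print(fixed_size_crop((100, 100, 40, 30), 1920, 1080, 224, 224))
# → (8, 3, 224, 224)
```

In a DeepStream pipeline, one place this logic could run is a buffer probe on the SGIE's sink pad, rewriting each object's `rect_params` to the fixed-size window before the SGIE crops it; whether that works also depends on the SGIE's min/max input-object size settings.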
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.