Deepstream YOLO pre-processing issues

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: deepstream-6.2

May I confirm whether there is a resize operation in the pre-processing stage of the YOLO model? My pipeline is nvurisrcbin → nvstreammux → nvinfer → fakesink. When I set the width and height of nvstreammux to 640x640, the detection boxes in my probe callback are limited to 640x640; if I set the width and height of nvstreammux to 1920x1080, the boxes are limited to 1920x1080.

So what I most want to confirm is: are the nvstreammux width and height the input size the model uses for detection?

No. nvstreammux is used to batch multiple streams, so it scales all input streams to a single common resolution. That resolution is independent of the model's input size; the box coordinates you see in the probe callback are expressed in the nvstreammux output resolution.
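Because the boxes in the probe callback are in the nvstreammux output resolution, they can be mapped back to a source's native resolution with a simple per-axis scale. A minimal sketch (the helper name is illustrative, not a DeepStream API):

```python
def rescale_box(left, top, width, height, mux_w, mux_h, src_w, src_h):
    """Map a detection box from the nvstreammux output resolution
    (mux_w x mux_h) back to the original stream resolution (src_w x src_h).
    Illustrative helper, not part of the DeepStream SDK."""
    sx = src_w / mux_w   # horizontal scale factor
    sy = src_h / mux_h   # vertical scale factor
    return left * sx, top * sy, width * sx, height * sy

# A box reported in a 640x640 mux, rescaled to a 1920x1080 source:
print(rescale_box(100, 50, 320, 240, 640, 640, 1920, 1080))
# (300.0, 84.375, 960.0, 405.0)
```

Note that if nvstreammux keeps the aspect ratio by padding (letterboxing), the padding offset must also be subtracted before scaling.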

Meanwhile, the model input size is fixed when the TensorRT engine is generated. Refer to this call chain:
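If you want to check the fixed input dimensions baked into an already-built engine, one way (assuming `trtexec` from the TensorRT samples is on your path, and `model.engine` stands in for your engine file) is:

```shell
# Load the serialized engine and print its bindings, including the
# fixed input tensor shape (e.g. 1x3x640x640 for a 640x640 YOLO model).
trtexec --loadEngine=model.engine --verbose
```

This is an offline inspection step only; nvinfer reads the same dimensions from the engine at initialization.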

NvDsInferContextImpl::initialize → generateBackendContext → buildModel → buildNetwork

During runtime, the InferPreprocessor::transform function converts the image into a tensor suitable for the model’s input, based on the model’s input dimensions.
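Conceptually, that transform amounts to: scale the frame to the engine's fixed input size, normalize the pixel values, and reorder the layout for the network. A rough sketch in NumPy (nearest-neighbour resize for brevity; the function name and the 1/255 scale factor are illustrative assumptions, not the actual InferPreprocessor code):

```python
import numpy as np

def preprocess(frame, net_w, net_h):
    """Sketch of what a transform like InferPreprocessor::transform does:
    resize the frame to the network's fixed input dims, normalize,
    and reorder HWC -> NCHW. Illustrative only."""
    src_h, src_w, _ = frame.shape
    # Nearest-neighbour resize to the engine's fixed input size.
    ys = np.arange(net_h) * src_h // net_h
    xs = np.arange(net_w) * src_w // net_w
    resized = frame[ys][:, xs]
    # Scale pixel values to [0, 1] (cf. net-scale-factor in the nvinfer config).
    tensor = resized.astype(np.float32) / 255.0
    # HWC -> CHW, then add the batch dimension.
    return tensor.transpose(2, 0, 1)[np.newaxis, ...]

# A 1920x1080 frame becomes a 1x3x640x640 input tensor:
inp = preprocess(np.zeros((1080, 1920, 3), dtype=np.uint8), 640, 640)
print(inp.shape)  # (1, 3, 640, 640)
```

This is why the nvstreammux resolution does not need to match the model: the pre-processing step resizes whatever it receives to the engine's fixed input dimensions.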

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.