May I confirm whether there is a resize operation in the preprocessing stage of the YOLO model? My pipeline is nvurisrcbin → nvstreammux → nvinfer → fakesink. When I set the width and height of nvstreammux to 640x640, the detection boxes reported in my probe callback are bounded by 640x640; when I set them to 1920x1080, the boxes are bounded by 1920x1080.
So what I most want to confirm is: are the nvstreammux width and height the input size the model uses for detection?
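For reference, the pipeline I described can be sketched with gst-launch roughly as below (the URI and the nvinfer config file path are placeholders, not my actual files):

```shell
# Sketch of the pipeline: nvurisrcbin -> nvstreammux -> nvinfer -> fakesink.
# Replace the uri and config-file-path values with your own.
gst-launch-1.0 \
  nvurisrcbin uri=file:///path/to/video.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  fakesink
```

Changing `width`/`height` on nvstreammux here is what changes the coordinate range I see in the probe.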
At runtime, the InferPreprocessor::transform function scales the batched frames from the nvstreammux resolution to the model's input dimensions and converts them into a tensor suitable for the model's input. The detection boxes attached to the output metadata are expressed in the nvstreammux resolution, which is why they track whatever width and height you configure on nvstreammux.
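To illustrate the coordinate mapping, here is a minimal sketch of how a box in model-input space relates to streammux frame space, assuming plain stretch resizing (i.e. `maintain-aspect-ratio` disabled; with letterboxing enabled the padding offsets would also need to be undone). The function name and box tuple layout are my own, not a DeepStream API:

```python
def model_to_frame(box, net_w, net_h, frame_w, frame_h):
    """Map a (left, top, width, height) box from the model's input
    resolution to the streammux frame resolution, assuming the frame
    was simply stretched to net_w x net_h during preprocessing."""
    sx = frame_w / net_w   # horizontal scale back to frame space
    sy = frame_h / net_h   # vertical scale back to frame space
    left, top, w, h = box
    return (left * sx, top * sy, w * sx, h * sy)

# A box detected in a 640x640 model input, mapped into a 1920x1080
# streammux frame:
print(model_to_frame((64, 64, 320, 320), 640, 640, 1920, 1080))
# → (192.0, 108.0, 960.0, 540.0)
```

This is the kind of conversion nvinfer applies internally before attaching object metadata, which is why the probe callback sees boxes bounded by the streammux resolution rather than the model's.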
Since there has been no update from you for a while, we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.