Different result from DeepStream when changing the size of nvstreammux

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** NVIDIA T4
**• DeepStream Version** 5.0
**• TensorRT Version** 7.0
**• NVIDIA GPU Driver Version (valid for GPU only)** 440
**• Issue Type (questions, new requirements, bugs)** questions

I have a pipeline using YOLOv4 which includes a vehicle detector, a plate detector, and a number detector:
Vehicle detector (PGIE) (416x416) → Plate detector (SGIE1) (320x320) → Number detector (SGIE2) (224x224)
I have a problem: when I set nvstreammux to an input size of (720, 1080), I cannot detect the numbers on the plate. But when I increase the nvstreammux size to (4000, 6000), the result is good.
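For reference, the nvstreammux output resolution is set through its `width`/`height` properties; a minimal gst-launch sketch of such a three-stage pipeline might look like the following (the video file and the three nvinfer config paths are placeholders, not taken from the original post):

```shell
# Sketch only: sample.mp4 and the *_config.txt paths are placeholders.
# nvstreammux scales every batched frame to the width/height set here,
# which is the value being varied in this thread (720x1080 vs 4000x6000).
gst-launch-1.0 \
  filesrc location=sample.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=pgie_vehicle_config.txt ! \
  nvinfer config-file-path=sgie1_plate_config.txt ! \
  nvinfer config-file-path=sgie2_number_config.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```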

To my understanding, the frame buffer will be resized to the input size of the PGIE model (416x416); after detection, the vehicle image is cropped from the frame (the original frame from the video) and resized to the SGIE1 input size (320x320), which predicts the location of the plate.
After SGIE1, the plate is cropped from the original frame and resized to the SGIE2 input size (224x224), which predicts the locations of the numbers. Is this right?
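One way to see why the mux size can matter for the last stage: to the best of my understanding the SGIE crop is taken from the streammux-scaled buffer, so the number of pixels in the plate crop depends on the nvstreammux resolution. A back-of-the-envelope sketch (the 8% x 3% plate fraction is an illustrative assumption, not a number from this post):

```python
# Hypothetical example: assume the plate covers 8% of the frame width
# and 3% of the frame height (illustrative numbers only).
PLATE_W_PCT, PLATE_H_PCT = 8, 3

def plate_crop_pixels(mux_w, mux_h):
    """Pixel size of the plate crop taken from the streammux-scaled frame."""
    return (mux_w * PLATE_W_PCT // 100, mux_h * PLATE_H_PCT // 100)

# At the two streammux sizes mentioned in the post:
print(plate_crop_pixels(720, 1080))   # (57, 32)  -> heavily upscaled to 224x224
print(plate_crop_pixels(4000, 6000))  # (320, 180) -> much closer to 224x224
```

Under this assumption, at (720, 1080) the plate crop handed to SGIE2 contains very few pixels and must be stretched far beyond its native detail, while at (4000, 6000) the crop is already near the 224x224 network input.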

And I don't understand why I get a good result when I increase the size of nvstreammux but a bad result when I set the nvstreammux size equal to the size of the original video. Please explain this to me.
Thanks

Exactly.

There is no update from you for a period, assuming this is not an issue any more.
Hence we are closing this topic. If need further support, please open a new one.
Thanks

It’s weird; could you dump the input, following DeepStream SDK FAQ - #9 by mchi?