Is it good practice to use `maintain-aspect-ratio=1`?

If the input sources do not have exactly the same aspect ratio as the model's input dimensions, should we set `maintain-aspect-ratio=1` for the best detector performance?

i.e. using the PeopleNet model, the input dims are `input-dims=3;544;960;0` and my Raspberry Pi camera sources (and streammux) are set to 3072x1728. Both of these have roughly a 1.77:1 aspect ratio, so the scaling introduces almost no distortion.

But if the camera resolution were 3072x2048, that's a ratio of 1.5:1, so I assume it would be best to set `maintain-aspect-ratio=1` there, otherwise the detector would be seeing squashed images.
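A quick sanity check of the numbers above (a sketch; the 960x544 network input is read off the `input-dims=3;544;960;0` line, and "distortion" here just means the mismatch between the horizontal and vertical scale factors of a plain stretch-resize):

```python
def stretch_distortion(src_w, src_h, net_w, net_h):
    """Ratio between the horizontal and vertical scale factors when the
    source frame is stretched to the network input without preserving
    aspect ratio. 1.0 means no distortion."""
    sx = net_w / src_w
    sy = net_h / src_h
    return max(sx, sy) / min(sx, sy)

# 3072x1728 source -> 960x544 network input: ~0.7% distortion, negligible
print(round(stretch_distortion(3072, 1728, 960, 544), 3))  # 1.007

# 3072x2048 source -> 960x544 network input: ~18% squash
print(round(stretch_distortion(3072, 2048, 960, 544), 3))  # 1.176
```

So for the 1728-pixel-tall source the stretch is essentially harmless, while the 2048-pixel-tall source would visibly squash people vertically unless the aspect ratio is preserved.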

Is my thinking here correct?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Nano and NX
• DeepStream Version 5
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) QUESTION
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi @jasonpgf2a ,
For better accuracy, it's best to keep the pre-processing the same between training and inference: if `maintain-aspect-ratio` was used during training (with padding added), you should do the same at inference.
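For reference, the setting goes in the `[property]` group of the nvinfer (GIE) config file. A minimal sketch, assuming the PeopleNet dims from the question (on DeepStream 5 the padded region is filled at the bottom/right; control over symmetric padding was only added in later DeepStream releases):

```
[property]
input-dims=3;544;960;0
# Preserve the source aspect ratio when scaling to the network input;
# the unused part of the network input is padded instead of stretched.
maintain-aspect-ratio=1
```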

Thanks!

What if I don't know how the model was trained, @mchi? For example, with the NVIDIA-provided models like PeopleNet, ResNet10 and so on…

For the NVIDIA models we also provide the GIE config file, so you can keep the config as released by NVIDIA.
For other models where, as you said, you don't know how the model was trained, I think users have to try both settings and check the output accuracy. BTW, if you don't know how the model was trained (e.g. the mean and offset values), it's hard to match the accuracy the model achieved in training.

YOLOv2 performance Issue based on streammux output size is an example involving `maintain-aspect-ratio=1`.

I've tried both settings on the NVIDIA-provided models and don't actually see any change in detection performance, so I will leave the default settings as provided in the sample config files. Thanks.

I think the difference will not be obvious for some videos; for example, an object scaled with and without `maintain-aspect-ratio=1` may look very similar.