YOLOv4-tiny with batch size 64 works, but batch size 1 gives wrong bounding boxes

Description

Hello,
Does anyone have any idea about running the YOLOv4-tiny model with batch size 1?

I referred to this YOLOv4 repo here to generate the ONNX file. By default, I had batch size 64 in my cfg. It took a while to build the engine, and inference was also as expected, but it was very slow.

Then I realized I should set batch size 1 in my cfg file.
I changed the batch size to 1 in my yolov4-tiny.cfg, generated the ONNX model, and used trtexec to build an engine with this command:
sudo ./trtexec --onnx=yolov4-tiny.onnx --saveEngine=yolov4-tiny.rt --fp16 --verbose

But when I run inference on some test data, I get the correct classes and probabilities, but the boxes are not correct.
They are far too small and are placed at the top of the image every time.

What could be the reason for this?
Should I change something in the postprocess (the output calculation) when I change the batch size to 1?
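In case it helps pin down where I suspect the problem is: boxes that come out tiny and pinned to the top of the image look like normalized (0-1) coordinates being drawn directly in pixel space, or a flat output buffer being reshaped with the wrong batch dimension. A minimal sketch of the decode step I'd expect (the `(batch, num_boxes, 4)` layout and the normalized `[x_center, y_center, w, h]` format are assumptions on my part, not necessarily what the repo's postprocess uses):

```python
import numpy as np

def decode_boxes(flat_output, batch_size, num_boxes, img_w, img_h):
    """Reshape a flat engine output buffer into (batch, num_boxes, 4)
    and scale normalized [x_center, y_center, w, h] boxes (0-1 range)
    to pixel-space [x1, y1, x2, y2]."""
    boxes = np.asarray(flat_output, dtype=np.float32).reshape(
        batch_size, num_boxes, 4)
    xc, yc, w, h = (boxes[..., 0], boxes[..., 1],
                    boxes[..., 2], boxes[..., 3])
    # Skipping this scaling (or reshaping with the wrong batch size)
    # leaves tiny boxes clustered near the top-left corner.
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return np.stack([x1, y1, x2, y2], axis=-1)

# One normalized box centered in a 416x416 image, half the image wide/tall:
out = [0.5, 0.5, 0.5, 0.5]
print(decode_boxes(out, batch_size=1, num_boxes=1, img_w=416, img_h=416))
# -> [[[104. 104. 312. 312.]]]
```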

Any help/hint would be appreciated.

Thank you

Environment

TensorRT Version: 8.0.1

Hi,
Can you try running your model with the trtexec command and share the `--verbose` log in case the issue persists?
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

You can refer to the link below for the list of supported operators; if any operator is not supported, you need to create a custom plugin to support that operation.

Also, we request you to share your model and script, if not shared already, so that we can help you better.

Meanwhile, for some common errors and queries, please refer to the links below:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/#error-messaging
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/#faq

Thanks!