Help needed to convert YOLOv4-tiny model to TensorRT engine (DS 5)

hardware: x64, RTX 2060
CUDA: 10.2
DeepStream: 5.0.1
TensorRT: 7.0.0.11
driver: 450.102.04

Hello, I am using GitHub - Tianxiaomo/pytorch-YOLOv4 (PyTorch, ONNX and TensorRT implementation of YOLOv4)
to build an engine file from my cfg/weights.
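For reference, these are the commands I'm running, per the repo README (file names are from my own setup, and the generated ONNX name will differ depending on your batch/input sizes — please verify against your checkout):

```shell
# darknet cfg/weights -> ONNX (script from Tianxiaomo/pytorch-YOLOv4)
python demo_darknet2onnx.py yolov4-tiny.cfg yolov4-tiny.weights dog.jpg 1

# ONNX -> TensorRT engine (trtexec ships with TensorRT 7)
trtexec --onnx=yolov4_tiny.onnx --explicitBatch --saveEngine=yolov4_tiny.engine
```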

The problem is that the engine produces nonsensical inference results (zero- or infinite-sized bboxes, all confidences equal to 1).
A very similar-looking issue was described here:

And this is seemingly happening even before I get the ONNX model -
demo.py in pytorch-YOLOv4 crashes because the inference result contains infinite-sized bboxes.
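To confirm the outputs are genuinely bogus rather than just a drawing bug, I check them with a small helper like this (plain Python; the `[x1, y1, x2, y2, conf]` box layout is my assumption, adapt it to the actual output format):

```python
import math

def find_bad_boxes(boxes):
    """Return indices of degenerate detections: non-finite coordinates,
    zero/negative box area, or a suspicious confidence of exactly 1.0."""
    bad = []
    for i, (x1, y1, x2, y2, conf) in enumerate(boxes):
        coords_ok = all(math.isfinite(v) for v in (x1, y1, x2, y2))
        area_ok = coords_ok and (x2 - x1) > 0 and (y2 - y1) > 0
        if not area_ok or not math.isfinite(conf) or conf == 1.0:
            bad.append(i)
    return bad

detections = [
    (10.0, 20.0, 110.0, 220.0, 0.87),             # plausible
    (float("inf"), 0.0, float("inf"), 0.0, 1.0),  # infinite-sized
    (50.0, 50.0, 50.0, 50.0, 1.0),                # zero-sized
]
print(find_bad_boxes(detections))  # → [1, 2]
```

In my case essentially every box in the output trips this check.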

I have verified my model/cfg files with both darknet and a simple Python opencv-dnn based script.
In both cases I get correct results (for the same image I used with the demo.py script).

Caveat: my opencv-dnn Python script was also failing with the same infinity issue; I fixed it by accident by switching to a different OpenCV version (currently 3.4.8.29).
I was hoping this would also fix the issue with pytorch-YOLOv4, but it didn't.

Apparently, there’s some weird incompatibility going on; I’ve tried different PyTorch versions (1.3, 1.4, 1.5, 1.7.1) without success.

I would appreciate any ideas about what I can try.
Thank you.

Hi,

We also recommend checking this issue with the repository author:
https://github.com/Tianxiaomo/pytorch-YOLOv4

Are you running inference with a custom YOLOv4 model?
If so, could you first check whether any configuration needs to be updated?

For example:
https://github.com/Tianxiaomo/pytorch-YOLOv4/blob/master/models.py#L333
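In particular, for a custom class count the `filters=` value in the [convolutional] block preceding each [yolo] layer of the cfg must follow the standard YOLO rule (this is a general darknet convention, not specific to this repo):

```python
def yolo_filters(num_classes, anchors_per_scale=3):
    """Expected `filters=` for the conv layer before each [yolo] block:
    (classes + 4 box terms + 1 objectness term) * anchors per scale."""
    return anchors_per_scale * (num_classes + 5)

print(yolo_filters(80))  # COCO model → 255
print(yolo_filters(2))   # 2-class custom model → 21
```

A mismatch between this value and the class count set in the conversion script is a common source of garbage detections.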

Thanks.