Darknet YOLOv4-tiny model inference with TensorRT 8

Description

I have a Darknet YOLOv4-tiny model trained on 5 object classes. I was using TensorRT 6 and the tkDNN repo to run inference, and everything worked perfectly.

Now I want to run inference with TensorRT 8.
I am trying to convert the Darknet YOLOv4-tiny model to ONNX and then to a TensorRT 8 engine, but unfortunately I am not able to do it properly.

Could you please let me know how I can perform inference on it?

Environment

TensorRT Version: 8.0.1

Hi,
Can you try running your model with the trtexec command and share the "--verbose" log in case the issue persists?
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
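For example (model.onnx is a placeholder for your exported model):

trtexec --onnx=model.onnx --verbose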

You can refer to the supported operators list here: https://github.com/onnx/onnx-tensorrt/blob/master/docs/operators.md. If any operator is not supported, you need to create a custom plugin to support that operation.

Also, please share your model and script if you have not already, so that we can help you better.

Meanwhile, for some common errors and queries, please refer to the links below:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/#error-messaging
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/#faq

Thanks!

Hi,

Please make sure your inference script is correct. You can refer to the samples here: https://github.com/NVIDIA/TensorRT/tree/master/samples
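For reference, a minimal sketch of TensorRT 8 inference in Python with a prebuilt engine (engine.trt is a placeholder file name; assumes pycuda is installed and the first binding is the input):

import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize a prebuilt engine (file name is a placeholder).
with open("engine.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate page-locked host buffers and device buffers for every binding.
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(i)), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    host_bufs.append(host)
    dev_bufs.append(dev)

# Dummy data standing in for a preprocessed (normalized, CHW) frame.
host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)

stream = cuda.Stream()
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for i in range(engine.num_bindings):
    if not engine.binding_is_input(i):
        cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
stream.synchronize()
# The host buffers for the output bindings now hold the raw network outputs.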

If you still face this issue, could you please share a repro script, input data, and the complete error logs for better debugging?

Thank you.

Could you please confirm whether you are using DeepStream?

Hi,

Sorry for the delayed response. We have gone through the code and it looks normal.
Could you please share the actual error logs you're seeing and, if possible, a minimal repro for better debugging?

Thank you.

@spolisetty @NVES I tried this repo to convert to an ONNX model.
It uses a batch size of 64 by default, so building the engine takes 1.5-2 hours and inference runs at roughly 0.7 FPS (far slower than expected).
Normally it should use batch size 1, but that is not working properly.

I changed the batch size to 1 in the config file and generated the ONNX model.
Then I used trtexec to create an engine.

Then I used my script (shared above) to run inference.

But I get too many boxes, and they are too small (the labels and probabilities are correct; only the box dimensions are wrong). The boxes are not around the objects; instead they are tiny and scattered over the input image.

Is something wrong with the calculation part, the engine creation part, or the ONNX conversion with batch size 1?

Could you provide the trtexec command to be used to create the engine, and the ONNX conversion script required for batch size 1?

Hi,

As the batch size is static, you can run the trtexec command normally to build the engine:
trtexec --onnx=/path/to/model.onnx --saveEngine=engine.trt

Also, could you please confirm that you get correct bounding box results with ONNX Runtime, to make sure the ONNX file was generated correctly?
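A minimal sketch of such a check with ONNX Runtime (yolov4-tiny.onnx is a placeholder file name; random data only verifies shapes, so feed a real preprocessed image to compare boxes):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov4-tiny.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

# Random data stands in for a preprocessed image; replace it to compare boxes.
dummy = np.random.rand(*[d if isinstance(d, int) else 1 for d in inp.shape]).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    print("output:", meta.name, out.shape)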

Also, please make sure your post-processing is working correctly (since the labels and probabilities are correct, the decoding of the box coordinates is the likely culprit).
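One common cause of tiny, misplaced boxes is applying the grid/anchor decoding twice: once inside the exported ONNX graph and again in the post-processing script. For reference, the standard Darknet decoding of one raw YOLO head looks roughly like this sketch (it omits the scale_x_y tweak some yolov4-tiny configs use; the anchors, input size, and class count are placeholders that must match your .cfg):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_head(raw, anchors, input_size=416, num_classes=5):
    # raw: one head's output, shape (batch, A*(5+num_classes), H, W).
    b, _, h, w = raw.shape
    a = len(anchors)
    raw = raw.reshape(b, a, 5 + num_classes, h, w)
    cy, cx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")  # grid cell offsets
    aw = np.array([an[0] for an in anchors], np.float32).reshape(1, a, 1, 1)
    ah = np.array([an[1] for an in anchors], np.float32).reshape(1, a, 1, 1)
    bx = (sigmoid(raw[:, :, 0]) + cx) / w         # box center x, normalized
    by = (sigmoid(raw[:, :, 1]) + cy) / h         # box center y, normalized
    bw = np.exp(raw[:, :, 2]) * aw / input_size   # box width, normalized
    bh = np.exp(raw[:, :, 3]) * ah / input_size   # box height, normalized
    obj = sigmoid(raw[:, :, 4])                   # objectness score
    cls = sigmoid(raw[:, :, 5:])                  # per-class scores
    return bx, by, bw, bh, obj, cls

If the ONNX graph already performs this decoding, running it again in your script would shrink the boxes, which matches the symptom you describe.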

You can refer to the TensorRT samples shared in the previous replies.

Thank you.

Solved it. There was a misinterpretation of the network.
