CSPDarknet53 TensorRT model is not working correctly

Dear NVIDIA team, I have trained YOLOv4 on a custom dataset with 3 different backbone networks, as below.

  1. Resnet18 (Training resolution: 1248*384)
  2. CSPDarknet53 (Training resolution: 640*480)
  3. Mobilenet_v1 (Training resolution: 640*480)

I exported all the above models to .etlt and converted them to TensorRT engines with tao-converter. Resnet18 and Mobilenet_v1 work perfectly without any issues, but the CSPDarknet53 predictions are very weird. Could you please help me solve this problem? I used the command below to convert to a TensorRT engine.

tao-converter -k 'nvidia_tlt' -p Input,1x3x384x1248,1x3x384x1248,1x3x384x1248 -e /workspace/experiments/model.engine -t fp16 /workspace/experiments/model.etlt (for Resnet18)

I changed the input size to Input,1x3x480x640,1x3x480x640,1x3x480x640 for Mobilenet and CSPDarknet53, i.e.:
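tao-converter -k 'nvidia_tlt' -p Input,1x3x480x640,1x3x480x640,1x3x480x640 -e /workspace/experiments/model.engine -t fp16 /workspace/experiments/model.etlt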

Thanks in advance.

What is the exact problem here? Are there any bboxes at all?
Also, can you first use tao inference to check whether the .etlt model works as expected?

Thanks for your reply. The exact problem is that the bounding boxes are not correct: there are bounding boxes, but they are small and land off the objects. The .tlt model inference works normally; only the TensorRT model is the problem.

To narrow down, could you please convert to an FP32 TensorRT engine and check whether it improves?
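For example, the same command with -t fp32:

tao-converter -k 'nvidia_tlt' -p Input,1x3x480x640,1x3x480x640,1x3x480x640 -e /workspace/experiments/model.engine -t fp32 /workspace/experiments/model.etlt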

Also, could you share the link where you downloaded tao-converter?

Yes, I have tried FP32 as well and there is still no improvement. I just can't understand why only the CSPDarknet53 TensorRT model is not working.

Can you use the converter inside the tao docker and run the test inside the tao docker again?

docker run --runtime=nvidia -it --rm -e NVIDIA_VISIBLE_DEVICES=0 nvcr.io/nvidia/tao/tao-toolkit:4.0.1-tf1.15.5

Then, inside the docker, run `converter`. See `converter -h` for more.
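Assuming it takes the same arguments as tao-converter (please confirm with `converter -h`), the call would look like:

converter -k 'nvidia_tlt' -p Input,1x3x480x640,1x3x480x640,1x3x480x640 -e /workspace/experiments/model.engine -t fp16 /workspace/experiments/model.etlt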

Dear Morganh, I used the tao docker for training and conversion. I did everything in the tao docker environment only.

So, please use converter instead of tao-converter and check again. Thanks.

I didn't understand what you said. What is the difference between converter and tao-converter?

There is a default binary named “converter” inside the docker.
You can use it directly.

Ok, thank you. I will try this and update you.

I have used converter to generate the TensorRT model and the bounding boxes are still the same.

How did you test with the TensorRT engine?

I am using a python script to run the TensorRT model. I referred to this post: Doing inference in python with YOLO V4 in TensorRT - postporsessing
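In outline, my script does the following (a minimal sketch, not my exact code: it assumes a TensorRT 8.x engine whose binding 0 is the 1x3x480x640 input, the standard TAO BatchedNMS outputs BatchedNMS / BatchedNMS_1 / BatchedNMS_2 / BatchedNMS_3, and boxes normalized to [0,1]; paths and preprocessing are illustrative):

import numpy as np
import cv2
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates and activates a CUDA context

ENGINE_PATH = "/workspace/experiments/model.engine"  # placeholder path
NET_H, NET_W = 480, 640

# Deserialize the engine and create an execution context.
logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open(ENGINE_PATH, "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One host/device buffer per binding, in binding order
# (assumed: 0 = Input, then BatchedNMS, BatchedNMS_1, BatchedNMS_2, BatchedNMS_3).
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(i)), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

def preprocess(img_bgr):
    # WARNING: this must match the training preprocessing exactly (channel
    # order, mean subtraction, scaling). A mismatch here is a classic cause
    # of small or misplaced boxes, so verify it against the training spec.
    img = cv2.resize(img_bgr, (NET_W, NET_H)).astype(np.float32)
    img = img.transpose(2, 0, 1)             # HWC -> CHW
    return np.ascontiguousarray(img[None])   # add batch dim -> NCHW

img = cv2.imread("test.jpg")
np.copyto(host_bufs[0], preprocess(img).ravel())

# Copy in, run synchronously, copy all outputs back out.
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])

# Decode BatchedNMS outputs: count, boxes (x1,y1,x2,y2 in [0,1]), scores, classes.
num_dets = int(host_bufs[1][0])
boxes = host_bufs[2].reshape(-1, 4)[:num_dets]
scores = host_bufs[3][:num_dets]
classes = host_bufs[4][:num_dets]
h0, w0 = img.shape[:2]
for (x1, y1, x2, y2), s, c in zip(boxes, scores, classes):
    # Normalized coords scale directly back to the original image
    # because the resize above was a plain stretch (no letterboxing).
    print(int(c), float(s), x1 * w0, y1 * h0, x2 * w0, y2 * h0)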

To narrow down, could you use the official github to run inference as well?
https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/yolov4_tao/pgie_yolov4_tao_config.txt

Git clone the above github, then set your engine file in configs/yolov4_tao/pgie_yolov4_tao_config.txt.
Or, first, you can also comment out the engine file line in that config and set the .etlt model and its key instead, to let deepstream generate the TensorRT engine itself.
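A minimal sketch of the relevant lines (the key names are standard nvinfer properties; the paths and dims below are placeholders for your model):

[property]
# comment out the pre-built engine so deepstream rebuilds it from the .etlt
# model-engine-file=/workspace/experiments/model.engine
tlt-encoded-model=/workspace/experiments/model.etlt
tlt-model-key=nvidia_tlt
infer-dims=3;480;640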

I haven't used deepstream until now. Is this related to the TensorRT conversion?

We need to narrow this down; that is why I suggest trying the above github to check.

More, you can also use tao inference to run inference against the tensorrt engine.

Refer to YOLOv4 - NVIDIA Docs
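For example, inside the tao docker (paths and the spec file below are placeholders):

yolo_v4 inference -e /workspace/experiments/specs/yolo_v4_retrain.txt -m /workspace/experiments/model.engine -i /workspace/test_images -o /workspace/inference_out -k nvidia_tlt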

Please share the result with us.

Hi Morganh, TensorRT inference with tao works well, but with my python script it is not working. Here is my inference script; can you please check it? It works for the TensorRT models of all the other networks except the CSPDarknet53 model. Hope you can help me sort this out, thank you so much.
testing.py (5.5 KB)

Can you share a sample image showing how the inference is not correct?