Failed to deploy a TLT-trained model in deepstream-app

• Hardware Platform (Jetson / GPU): Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only): not sure

I trained an SSD model on a Titan XP with the Transfer Learning Toolkit and exported it to an FP16 .etlt file. I then copied the .etlt to the Xavier and used tlt-converter to build a TensorRT engine from it.
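The conversion step was of roughly this form (the key, file names, and max batch size below are placeholders; NMS is the output node name normally used by the TLT SSD export, but please verify the node name and flags against the TLT documentation for your version):

tlt-converter -k <tlt_encode_key> \
              -d 3,512,512 \
              -o NMS \
              -t fp16 \
              -m 1 \
              -e ssd_miotcd_fp16.engine \
              <exported_model>.etlt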

Then I ran deepstream-app with the config from the samples folder:

deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_miotcd.txt

On the output display, there is only a single bounding box that flashes for one frame of the video. The input size of my TLT-trained SSD model is 512 x 512, and the input video is 720 x 480. I have also attached the two config files here. Is there anything wrong with the config?

config_infer_primary_miotcd.txt (1.2 KB) deepstream_app_source1_miotcd.txt (2.4 KB)

Thanks.

Since your model was trained with a 512x512 input, please change the following in your nvinfer config file (config_infer_primary_miotcd.txt) and retry.

input-dims=3;480;720;0

to

input-dims=3;512;512;0
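For reference, the [property] section of config_infer_primary_miotcd.txt should then look roughly like the sketch below. Only the input-dims line is the actual fix; the key, paths, class count, and parser settings are placeholders based on the standard TLT SSD sample config and must match your own model and installation:

[property]
labelfile-path=labels_miotcd.txt
tlt-encoded-model=<exported_model>.etlt
tlt-model-key=<tlt_encode_key>
model-engine-file=ssd_miotcd_fp16.engine
uff-input-blob-name=Input
# The fix: must match the 512x512 training resolution (format: C;H;W;0 for NCHW)
input-dims=3;512;512;0
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=<number of classes in your labels file>
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=<path to libnvds_infercustomparser_tlt.so>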