TLT YOLO v3 model cannot detect anything in Deepstream 5.0, JetPack 4.4

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1
• NVIDIA GPU Driver Version (valid for GPU only)

Problem description

I trained a YOLOv3 model on my dataset with the NVIDIA Transfer Learning Toolkit, exported it, and converted it with tlt-converter to a trt.engine file (with an INT8 calibration file) for Jetson Xavier.

The model works well in the TLT Jupyter notebook environment and can detect objects, but when I use deepstream-test3-app to deploy it, nothing is detected; the obj_metadata from the pgie is empty.

What I’ve done

  1. Trained a YOLOv3 model via TLT.
  2. Exported it and converted it to a trt.engine with the INT8 calibration file.
  3. Built TRT-OSS for Jetson. Actually, I just copied the .so file provided in the deepstream-tlt-app repo; the repo notes that the TRT 7.1 version is slightly different from 7.0, but I didn't build it from scratch since a prebuilt library is provided.
  4. Built the deepstream-custom app and copied libnvds_infercustomparser_yolov3_tlt.so into my app folder (duplicated from deepstream-test3).
  5. Configured pgie_config.txt to load my custom YOLOv3 trt.engine and set the other properties (a rough sketch of the config follows this list).
  6. Built my app and ran it.
  7. Nothing is detected in the OSD output. The program runs smoothly and the log says my model is loaded; there are just no detections.
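
For reference, here is roughly what my pgie_config.txt looks like. The property keys are the standard nvinfer ones, but the paths, class count, offsets, output blob name, and parser function name below are placeholders from my setup (based on the deepstream-tlt-app sample config), so they may not match other setups exactly:

  [property]
  gpu-id=0
  net-scale-factor=1.0
  offsets=103.939;116.779;123.68
  # 1 = BGR input, as in the TLT YOLOv3 sample config
  model-color-format=1
  labelfile-path=labels.txt
  model-engine-file=yolov3_resnet18_int8.engine
  int8-calib-file=cal.bin
  # network-mode: 0=FP32, 1=INT8, 2=FP16
  network-mode=1
  num-detected-classes=3
  batch-size=1
  gie-unique-id=1
  output-blob-names=BatchedNMS
  parse-bbox-func-name=NvDsInferParseCustomYOLOV3TLT
  custom-lib-path=libnvds_infercustomparser_yolov3_tlt.so

  [class-attrs-all]
  threshold=0.3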

What I’ve tried

  1. Decreased the confidence threshold to 0.01; nothing changed.
  2. Checked the obj_meta generated after the pgie; it is completely empty, so no objects get counted (see the probe sketch after this list).
  3. Double-checked the model; it can detect objects in images in the TLT environment.
  4. Found a thread similar to my problem: Custom YOLOv3 model in DeepStream 5.0
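
This is roughly the probe I used for that check, following the pattern of the deepstream-test apps. It is only a minimal sketch; the pad I attach it to (the pgie src pad) may differ from what others use:

  #include <gst/gst.h>
  #include "gstnvdsmeta.h"

  /* Pad probe on the pgie src pad: counts NvDsObjectMeta entries per frame. */
  static GstPadProbeReturn
  pgie_src_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
  {
    GstBuffer *buf = (GstBuffer *) info->data;
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
    if (!batch_meta)
      return GST_PAD_PROBE_OK;

    /* Walk every frame in the batch and every object attached to it. */
    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
      NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
      guint num_objs = 0;

      for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
        NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
        g_print ("  class %d, confidence %.2f\n", obj_meta->class_id, obj_meta->confidence);
        num_objs++;
      }
      g_print ("Frame %d: %u objects detected\n", frame_meta->frame_num, num_objs);
    }
    return GST_PAD_PROBE_OK;
  }

It is attached with gst_pad_add_probe (pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER, pgie_src_pad_buffer_probe, NULL, NULL); and it always prints 0 objects.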

I’m not sure why your TLT model works correctly in the notebook but not with the deepstream-custom app. Maybe you can try the main deepstream-app for YOLO, which uses the plugins designed for parsing YOLO outputs, located at:
/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo

Check out the README; there’s an option for loading TLT models.
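
Roughly, the steps there are as follows (a sketch from memory; please verify the exact commands and the CUDA version for your JetPack against that README):

  cd /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo
  # download the reference YOLO cfg/weights used by the sample
  ./prebuild.sh
  # build the custom output-parsing library (CUDA 10.2 ships with JetPack 4.4)
  export CUDA_VER=10.2
  make -C nvdsinfer_custom_impl_Yolo
  # run the reference app with the YOLOv3 sample config
  deepstream-app -c deepstream_app_config_yoloV3.txt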

-Dilip.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi @CoderJustin
Some questions:

  1. Did you try fp16 or fp32? (See the tlt-converter sketch after these questions.)
  2. Were the trt engine and libnvds_infercustomparser_yolov3_tlt.so both generated on the target Jetson platform?
  3. Did you try your TLT YOLOv3 model with GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream ?
  4. The current DS 5.0 is verified on the JetPack 4.4 DP release; could you install DS 5.0 and JetPack 4.4 DP with SDKManager and try again?
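
For question 1, a minimal fp16 conversion on the Xavier would look roughly like this. The key, input dims, output node, and file names are placeholders for your export settings; please check the TLT YOLOv3 documentation for the exact flags:

  # run the Jetson build of tlt-converter on the target Xavier, not the training host
  ./tlt-converter -k <your_ngc_key> \
                  -d 3,384,1248 \
                  -o BatchedNMS \
                  -t fp16 \
                  -m 1 \
                  -e yolov3_fp16.engine \
                  yolov3.etlt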

Thanks!