DeepStream v5: unexpected real-time detection results from TLT models


I have trained MobileNet and DetectNet_v2 with the TLT app. Both give high accuracy on the test dataset, although the DetectNet_v2 model produces a lot of false positive predictions. However, when I deploy the models and use them with DeepStream, they give very poor detection performance with a live camera.

When I do not prune the models, the FPS drops to 10-14 even though the model size is only 8 MB, and detection performance is still poor.

1. What can I do to get acceptable FPS with good detection accuracy?
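For context on the pruning step mentioned above: in TLT, pruning is done with the `tlt-prune` command before retraining. A hedged sketch follows; the paths, encryption key, and threshold value are placeholders, and the exact flags may vary between TLT versions, so please check against the TLT documentation for your release:

```
# Prune a trained DetectNet_v2 model (placeholder paths and key).
# A higher -pth prunes more aggressively: smaller/faster model,
# but accuracy must be recovered by retraining afterwards.
tlt-prune -m /workspace/detectnet_v2_unpruned.tlt \
          -o /workspace/detectnet_v2_pruned.tlt \
          -eq union \
          -pth 0.3 \
          -k $TLT_KEY
```

After pruning, the model is retrained on the same dataset so that accuracy lost to pruning is regained before export and deployment.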

I had also trained a PyTorch MobileNet_V1 with the same dataset and ran inference with the same camera settings. That model was more accurate than the TLT models, and it was also faster than the unpruned TLT models when run with TRT. Unfortunately, I could not integrate it into DeepStream, so I had to use it with OpenCV, but then there is no chance to use multiple video sources with good FPS.

  2. How can I use an ONNX model to make predictions while still getting the video source through DeepStream?
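Regarding question 2: the DeepStream `nvinfer` plugin can consume an ONNX model directly via the `onnx-file` property in its config file, and TensorRT builds the engine on first run. A hedged sketch of such a config, where the file names, scale factor, and class count are placeholders for your own model:

```
# Hedged nvinfer config sketch for an ONNX detection model
# (all file names and dimensions below are placeholders).
[property]
gpu-id=0
net-scale-factor=0.0039215686
onnx-file=model.onnx
model-engine-file=model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1
```

With this in place, the usual DeepStream pipeline (e.g. `nvarguscamerasrc`/`uridecodebin` → `nvstreammux` → `nvinfer` → `nvdsosd`) can feed multiple camera or video sources to the ONNX model.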

Thank you for your support

• Hardware Platform (Jetson / GPU) Jetson TX2
• DeepStream Version v5
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1
• NVIDIA GPU Driver Version (valid for GPU only) -


You mentioned that DetectNet_v2 with a MobileNet backbone gets high mAP. Please run tlt-infer to confirm. I cannot understand why there are lots of false positives when the mAP is high.
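For reference, a hedged sketch of the tlt-infer invocation for DetectNet_v2; the spec file, image directory, and key are placeholders, and the exact arguments may differ by TLT version:

```
# Run inference with the trained .tlt model on a directory of test
# images and write annotated outputs (placeholder paths and key).
tlt-infer detectnet_v2 -e /workspace/inference_spec.txt \
                       -i /workspace/test_images \
                       -o /workspace/infer_output \
                       -k $TLT_KEY
```

Comparing these results against the camera-time DeepStream results can show whether the gap comes from the model itself or from the deployment pipeline (e.g. preprocessing or input resolution differences).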