Tracker Suppressing Detections

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Orin
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1
• TensorRT Version:

My pipeline: nvarguscamerasrc → streammux → nvdspreprocess → nvinfer (Yolo for object detection) → nvtracker → udpsink/filesink.
When I exclude the tracker, detections are observed, but with too many false positives. My camera is in motion while the objects I'm detecting remain stationary.
I’ve experimented with NvDCF, NvSORT, and IOU trackers, but none of them seem to be providing any detections. Currently, I’m using the default NvDCF_max_perf.yml configuration file. What adjustments should I make to ensure the tracker functions properly? I’ve set interval=0 in my nvinfer config file.
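For reference, the settings in the NvDCF YAML config that most directly decide whether detections survive the tracker are the confidence thresholds and the probation settings. A minimal sketch of the relevant fragment (key names follow the DeepStream 6.2 NvDCF schema; the values here are illustrative, not recommendations):

```yaml
BaseConfig:
  minDetectorConfidence: 0.1   # detections scoring below this are ignored by the tracker

TargetManagement:
  minTrackerConfidence: 0.1    # targets below this tracker confidence are not reported
  probationAge: 0              # 0 = report targets immediately, no confirmation period
  earlyTerminationAge: 1       # drop unconfirmed tentative targets quickly
```

Lowering `minDetectorConfidence`, `minTrackerConfidence`, and `probationAge` tends to let more objects (including small ones) through the tracker, at the cost of more false positives.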

Do you mean that the detection results are very poor without a tracker?

What do you ultimately want to achieve?

I’m aiming to achieve real-time object detection at a speed of 3 meters per second. I’ve experimented both with and without using a tracker during inference.

When I don’t use a tracker:

  1. Most objects, including small ones, are detected.
  2. However, there’s a high number of false positives.
  3. The stream runs smoothly without any noticeable lag at 20fps.

But when I use a tracker:

  1. Only larger objects are consistently detected.
  2. False positives are reduced.
  3. However, smaller objects and a few large objects are missed.
  4. Occasionally, there’s a delay in the stream, even dropping to 16fps.

I am sharing my config file and pipeline that uses the tracker. Could you advise on adjustments to ensure that detections aren’t suppressed by the tracker and that smaller objects are detected?
config_infer_primary_yoloV5_320_exp38_fp16_0.04.txt (701 Bytes)
config_tracker_NvDCF_max_perf.txt (5.2 KB)
pipeline.txt (927 Bytes)

I also noticed that if probationAge > 0, no detections are observed at all.
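That is consistent with how probation works: a target is only reported after it has been associated with detections for `probationAge` consecutive frames, so with an unstable detector (e.g. detections flickering while the camera moves) no target may ever graduate. A hedged fragment for disabling the confirmation period (YAML keys per the NvDCF config; values illustrative):

```yaml
TargetManagement:
  probationAge: 0          # report tracks from the first associated detection
  earlyTerminationAge: 1
```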

The tracker adds some processing load; that is normal. Could you dump the video from the camera and attach it for us? If you can attach the model as well, we can reproduce the issue on our side.

How can I share it with you privately?

You can just click my icon and message that to me.

Please check

Customer Questions:
A: What changes should be made in the tracker’s config file in order to:
1- Reduce false positives
2- Detect small objects
3- Increase FPS

B: What should my tracker height/width be? My model/ROI size which is 320x320 or my entire frame size which is 3840x2160?

C: If the same object occurs in 2 ROIs partially, how can I consider it as one object only instead of 2?

D: Is there any improvement that can be made in the pipeline?

E: My detected objects are getting wrongly classified, why and how can I fix it?

F: Will using the useHog parameter in the tracker’s config file be better for me?

Please suggest any other changes as well if needed.

Hi @gt3rs , about some basic questions, you can refer to our Guide and FAQ first.
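On question B: `tracker-width` and `tracker-height` are properties of the nvtracker element and set its internal processing resolution; they are independent of both the model input (320x320) and the full frame (3840x2160), and are typically multiples of 32. A sketch in deepstream-app-style config (values and library path are illustrative for DeepStream 6.2, not a recommendation for your scene):

```ini
[tracker]
enable=1
# Internal processing resolution of nvtracker, independent of model/frame size.
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_max_perf.yml
```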

We cannot recognize the same object across two ROIs for these particular scenes.