I am working on a project to detect text fields on passing trucks and perform OCR on those fields. The detection model is a custom YOLOv5 model that has been converted to a TensorRT engine, and the DeepStream 6 pipeline contains various analytics elements such as ROIs, line crossings, and direction detection, along with a tracker plugin.
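For context, the analytics are configured through the nvdsanalytics plugin, roughly along these lines (a sketch in the style of the DS6 sample config; labels and coordinates are illustrative placeholders, not my real values):

    [property]
    enable=1
    config-width=1920
    config-height=1080

    [roi-filtering-stream-0]
    enable=1
    # ROI polygon as x1;y1;x2;y2;... (placeholder coordinates)
    roi-TextField=100;100;800;100;800;600;100;600
    class-id=-1

    [line-crossing-stream-0]
    enable=1
    # direction line followed by crossing line (placeholder coordinates)
    line-crossing-Entry=789;672;1084;900;851;773;1203;732
    class-id=0
    mode=loose

    [direction-detection-stream-0]
    enable=1
    # direction vector as x1;y1;x2;y2 (placeholder coordinates)
    direction-South=284;840;360;662
    class-id=0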
For some videos I noticed that the pipeline failed to detect any objects throughout the entire video, while performance was very good on other videos with the same settings. I checked the detection performance of the original YOLOv5 model using the detect.py script provided in the YOLOv5 repo, and performance there was excellent: all objects were detected in ~95% of the frames across all videos. I also ran the model in a simple C++ test app with no tracker, analytics, or anything else, just streammux, pgie, and osd, and with this app the model performed on par with the detect.py script.
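For reference, the test app's pipeline is roughly equivalent to the following gst-launch-1.0 line (a sketch: the file name, resolution, config path, and sink are placeholders, and the real app is plain C++):

    gst-launch-1.0 filesrc location=truck.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! \
        mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
        nvinfer config-file-path=pgie_yolov5_config.txt ! \
        nvvideoconvert ! nvdsosd ! nveglglessink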
While debugging this issue I noticed that when the tracker is removed from my pipeline, performance is similar to what I see with the detect.py script, which indicates that the tracker is somehow interfering with the detection results. Removing the tracker from the pipeline is unfortunately not an option for me, as several of my analytics functions require a tracker to be present.
I tried several trackers (IOU, NvDCF, and DeepSORT) and none of them conclusively fixed the issue. For all of them I used the standard configurations from the sample applications. Pipeline performance on the test videos depended greatly on which tracker was used: some videos worked very well with NvDCF but produced no output with DeepSORT, while others showed the opposite (no output with NvDCF, near-perfect output with DeepSORT). In general, though, every tracker made the pipeline perform significantly worse than running with no tracker at all (which, again, is not an option for my application).
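For context, switching trackers only meant pointing the nvtracker element at a different low-level config, e.g. in deepstream-app style (paths and sample config names as in the DS6 samples; a sketch, not my exact settings):

    [tracker]
    enable=1
    tracker-width=640
    tracker-height=384
    # DS6 unified tracker library; the low-level config selects the tracker type
    ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
    ll-config-file=config_tracker_NvDCF_perf.yml
    #ll-config-file=config_tracker_IOU.yml
    #ll-config-file=config_tracker_DeepSORT.yml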
In the end, my two main questions are:

1. Why do the trackers suppress the PGIE output, and is there any way to prevent this?
2. Why do the different trackers perform so differently on the same videos?
Unfortunately the project is not open source, so I am not able to share the exact application and videos where the issue occurs. However, I managed to reproduce the issue with a slightly different DeepStream app. I have attached a zip file with everything needed to reproduce the error, including models and test videos.
It is clearly visible that when the tracker is disabled/unlinked in the pipeline, more frames have bounding boxes for several of the objects than when the tracker is active, even when the PGIE interval is set to "0".

deepstream-test.zip (99.8 MB)
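For clarity, the PGIE interval mentioned above is the nvinfer property controlling how many batches are skipped between inference calls; 0 means inference runs on every frame. From the PGIE config (the engine path is a placeholder):

    [property]
    model-engine-file=yolov5_trt.engine
    batch-size=1
    # 0 = run inference on every frame, no frame skipping
    interval=0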
Have there been any updates since I sent the files? This issue urgently needs fixing for my project, so any additional information would be very welcome :D
There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
Hi, sorry for the delay in answering. The topic posted for reference did not solve the issue: setting maxShadowTrackingAge=0 and earlyTerminationAge=0 as suggested did not help to sustain the bounding boxes in our application. It would be appreciated if the topic could be reopened so that a working solution can be found.
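For reference, these values were set in the TargetManagement section of the NvDCF tracker YAML (keys as in the DS6 sample config; everything else was left at the sample defaults):

    TargetManagement:
      maxShadowTrackingAge: 0   # do not keep lost targets alive in shadow mode
      earlyTerminationAge: 0    # terminate tentative targets immediately once lost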