Please provide complete information as applicable to your setup.
• **Hardware Platform** → GPU
• **DeepStream Version** → 7.0
• **TensorRT Version** → 8.6.0.2
• **NVIDIA GPU Driver Version** → 535.230.12
Let me explain the scenario properly. I have an inference model (the person detection model provided by NVIDIA), and in the infer.txt config file `interval=3` is set. After nvinfer we have the tracker.
When we print `obj_meta.confidence`, inference should run again only after 3 skipped frames, right? I am attaching a screenshot so you can see it clearly.
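To make the expectation concrete, here is a minimal sketch of the cadence we expect from the `interval` property (this reflects our reading of the behavior, not the plugin source):

```python
# interval=3 should mean: infer on one frame, then skip the next 3,
# so a fresh detection lands on every 4th frame (0, 4, 8, ...).
INTERVAL = 3

def is_inferred(frame_num: int) -> bool:
    """True on frames where nvinfer should run a fresh detection."""
    return frame_num % (INTERVAL + 1) == 0

expected = [f for f in range(12) if is_inferred(f)]
print(expected)  # [0, 4, 8]
```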
Which type of nvtracker are you using? The NvDCF tracker (/opt/nvidia/deepstream/deepstream-7.1/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml) supports PGIE interval > 0 and predicts the bboxes on frames where inference is skipped: Gst-nvtracker — DeepStream documentation
We are encountering a consistent issue across all supported trackers in DeepStream, and we would appreciate your assistance in verifying and resolving it.
We are configuring our nvinfer element with interval=3, expecting the following behavior:
Every 4th frame (interval + 1) should trigger a fresh detection by nvinfer.
The intervening 3 frames should rely on the tracker (e.g., NvSORT, IOU, NvDCF) for object propagation.
However, what we observe in practice is that nvinfer is performing detections at arbitrary intervals, not consistently every 4th frame as expected. This behavior persists regardless of the tracker backend we use, suggesting that the inconsistency may be rooted in nvinfer or its coordination with the tracker module.
What We’ve Tried:
Used multiple tracker types (NvDCF, NvSORT, etc.)
Ensured interval=3 is correctly set in the nvinfer configuration
Verified the pipeline is running in real-time without frame drops
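For reference, the relevant fragment of our nvinfer configuration (a sketch; all other keys omitted):

```ini
[property]
# Infer on one batch, then skip the next 3 (fresh detection every 4th frame)
interval=3
```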
Yet, the detection cadence from nvinfer remains irregular.
Could you replicate the issue and get back to us as soon as possible?
How do you check this? Did you add a probe at the src pad of nvinfer? nvinfer is an open-source plugin, so you can add logs to confirm the behavior. Please use the NvDCF tracker when you set a PGIE inference interval.
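To check the cadence from such a probe's log, a small helper like this could flag irregular gaps. The `(frame_num, confidence)` log format is hypothetical, and the premise that tracker-propagated objects report confidence <= 0 is our assumption from observation, not documented behavior:

```python
def detection_gaps(log):
    """Given (frame_num, confidence) pairs from a probe log, return the
    frame gaps between entries carrying a fresh-detection confidence.
    Assumes tracker-propagated objects report confidence <= 0."""
    detected = [frame for frame, conf in log if conf > 0]
    return [b - a for a, b in zip(detected, detected[1:])]

# Hypothetical log excerpt: with interval=3 every gap should be 4.
log = [(0, 0.82), (1, -0.1), (2, -0.1), (3, -0.1), (4, 0.79),
       (5, -0.1), (6, -0.1), (7, -0.1), (8, 0.85)]
print(detection_gaps(log))  # [4, 4]
```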
We take a pad from the fakesink and add a pad probe callback; in it we print:
`print("label ->", obj_meta.obj_label, obj_meta.object_id, obj_meta.confidence)`
Since our interval is 3, we expect detection on every 4th frame, where the object gets confidence > 0; that is what the picture above shows. We are using the NvDCF tracker and have tried the three different *.yml configs (perf, accuracy, and max_perf). We are not saying it is necessarily an nvinfer issue; the inconsistency may be rooted in nvinfer or in its coordination with the tracker module.
We request that you replicate the issue and get back to us as soon as possible.
I’ve identified the root cause of an issue we’re facing and would like your advice on a more robust solution.
Context:
We’re building a dynamic DeepStream pipeline where cameras (RTSP streams) can be added, modified, or removed at runtime. To keep the pipeline alive during idle periods (i.e., when no RTSP sources are attached), we use a default .mp4 video file as a persistent source in the pipeline.
This default stream acts as a “heartbeat” source to keep the pipeline from terminating unexpectedly.
Problem:
When both the .mp4 stream and an RTSP camera are active:
The .mp4 stream (with high FPS) seems to interfere with how nvinfer manages the interval property.
For example, with interval=3, we expect detection every 4th frame. However, this is not reliably respected — nvinfer appears to detect at irregular intervals.
The behavior appears to be caused by the mismatch in frame rates between the file stream and the RTSP stream, leading to scheduling and buffer timing inconsistencies.
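Our working theory, sketched as a toy simulation. It rests on an assumption about nvinfer internals: per the Gst-nvinfer documentation, `interval` counts skipped *batches*, not per-source frames, so with mixed frame rates the per-source detection cadence can look irregular even though every 4th batch is inferred. The tick timings below are hypothetical:

```python
INTERVAL = 3  # nvinfer config value: infer 1 batch, skip the next 3

# One batch per "tick". The file source contributes a frame on every
# tick; the RTSP source's frames land on jittered ticks (made-up data).
rtsp_arrival_ticks = [0, 3, 7, 10, 14, 17, 21, 24, 28, 31, 35, 38]

# Batches actually inferred: every (INTERVAL + 1)-th batch.
inferred_ticks = {t for t in range(40) if t % (INTERVAL + 1) == 0}

# Cadence measured in each source's own frame numbers.
file_inferred = [t for t in range(40) if t in inferred_ticks]
rtsp_inferred = [i for i, t in enumerate(rtsp_arrival_ticks)
                 if t in inferred_ticks]

file_gaps = [b - a for a, b in zip(file_inferred, file_inferred[1:])]
rtsp_gaps = [b - a for a, b in zip(rtsp_inferred, rtsp_inferred[1:])]

print("file gaps:", set(file_gaps))  # {4}: steady cadence
print("rtsp gaps:", set(rtsp_gaps))  # several values: looks irregular
```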
Goal:
We want to:
Keep the pipeline alive even when no real camera streams are attached
Avoid disrupting nvinfer behavior, especially when interval > 0 is used
Maintain correct detection cadence when RTSP streams are dynamically added or removed
Questions for Guidance:
Is there a better way to keep the pipeline alive without relying on a default .mp4 stream?
Can we configure nvinfer or the pipeline to ignore or isolate the .mp4 stream from impacting detection intervals on live streams?
Is there a recommended approach for multi-source, dynamic camera management that avoids issues with interval?
We’d greatly appreciate your insights or suggestions on best practices to achieve this kind of dynamic and resilient pipeline setup without affecting inference timing.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.