How to prevent annotated frames being written to unannotated file sink?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GeForce RTX 4080
• DeepStream Version: nvcr.io/nvidia/deepstream:6.3-triton-multiarch
• TensorRT Version
• **NVIDIA GPU Driver Version (valid for GPU only):** Driver Version: 550.67, CUDA Version: 12.4
• Issue Type: Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I have a pipeline ingesting from an RTSP source (nvurisrcbin) that, after inference, I want to split (tee) into annotated and unannotated files. The files are H.264-encoded .mp4 files. However, some frames in the unannotated video files are actually annotated frames. I believe this is because both branches after the tee share the same buffer in memory, so nvosd sometimes annotates a frame before it is written to the unannotated file.

My question is: how can I ensure that the unannotated videos contain only unannotated frames, while otherwise keeping the caps of the annotated and unannotated videos the same? That is, I want both videos to have the same resolution, frame rate, etc.

My pipeline is shown below (albeit missing the nvurisrcbin at the start):
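As a rough approximation of its shape, here is a gst-launch-1.0 sketch reconstructed from the description above (element names, resolutions, and encoder settings are illustrative assumptions, not my exact configuration):

```shell
# Hypothetical sketch of the problematic split (assumed element names).
# Both tee branches share the same NVMM surface, so in-place drawing by
# the OSD can leak into the "unannotated" branch.
gst-launch-1.0 \
  nvurisrcbin uri=rtsp://camera/stream ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
    nvinfer config-file-path=config_infer.txt ! tee name=t \
  t. ! queue ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! \
    nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=annotated.mp4 \
  t. ! queue ! nvvideoconvert ! \
    nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=unannotated.mp4
```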

Could you try to add the tee plugin before the nvstreammux?

There is no nvstreammux in this pipeline?

The complete pipeline should be urisrcbin->nvstreammux-><the pipeline you attached>.
What I mean is adding the tee plugin between the nvstreammux and the urisrcbin.
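For clarity, a sketch of that arrangement (element names and properties are assumptions): the tee sits between the source and nvstreammux, so the unannotated branch receives its own buffers before they ever enter the inference/OSD path.

```shell
# Sketch of the suggested rearrangement (assumed element names):
# tee before nvstreammux, so only one branch flows through nvdsosd.
gst-launch-1.0 \
  nvurisrcbin uri=rtsp://camera/stream ! tee name=t \
  t. ! queue ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
    nvinfer config-file-path=config_infer.txt ! \
    nvvideoconvert ! nvdsosd ! nvvideoconvert ! \
    nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=annotated.mp4 \
  t. ! queue ! nvvideoconvert ! \
    nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=unannotated.mp4
```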

Yes I will try that.

For my understanding, could you explain what the reasoning for that would be? It will aid my mental model of what is happening under-the-hood.

Thank you

It may be a bug in DeepStream 6.3, but we have fixed similar issues in DeepStream 6.4. You can try that with our latest version.

Thanks. Is what I am observing actually a bug though, or is it just the expected behaviour of teeing the pipeline when using NVMM buffers?

It’s actually a bug. You can use the DeepStream 6.4 Version to check that.

OK, thanks. I’m unable to update to 6.4 at this time, so for now I will work around the issue by moving the tee higher up the pipeline, to before inference.
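Roughly, the workaround looks like this (again a sketch with assumed element names, not my exact pipeline): the tee now sits after nvstreammux but before nvinfer, so the unannotated branch never flows through nvdsosd at all.

```shell
# Sketch of the workaround (assumed element names): tee moved upstream
# of nvinfer; the unannotated branch bypasses inference and the OSD.
gst-launch-1.0 \
  nvurisrcbin uri=rtsp://camera/stream ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! tee name=t \
  t. ! queue ! nvinfer config-file-path=config_infer.txt ! \
    nvvideoconvert ! nvdsosd ! nvvideoconvert ! \
    nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=annotated.mp4 \
  t. ! queue ! nvvideoconvert ! \
    nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=unannotated.mp4
```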

