After inference, is there a way to store the portion of the video clip which has the anomaly?

Hi,

I am new to the DeepStream SDK and am evaluating it for a use case which involves saving only the parts of the video clip that contain the anomaly.

Any guidance on how this can be achieved?
Also, how can I retrieve the related metadata for logging to a video library, assuming the input is multiple video streams?

Hi
What do you mean by anomaly?
We have an anomaly detection app; please see if it can meet your requirements.

By anomaly I meant inference.

After inference, how can the stream be split so that only that portion of the streaming video is saved?

Do you want to save the frames with the bounding boxes blended in, or just save the frames which have inference results?
If the first, we do not support this; if the second, you can use the dsexample plugin to save the frames.
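
If you are using the reference deepstream-app, dsexample is enabled from the app config file. The group below is a sketch from memory, so the key names may differ between SDK versions and should be checked against the deepstream-app config documentation; note that actually writing frames to disk means extending the plugin source (gstdsexample.cpp) yourself.

```
[ds-example]
enable=1
# process the full frame rather than cropped objects
full-frame=1
processing-width=640
processing-height=480
unique-id=15
```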

You can save video clips of the “anomaly”, or maybe you should call it the “detection”, using the new DS 5.0 smart record feature. NVIDIA provides the test5 sample app to show how it can be used; there it records on demand BEFORE decoding.
But… if you want bounding boxes drawn on the frames, you can also use it AFTER the Tiler/Demuxer components. I am doing this successfully now, so it should suit your purposes. I record 15-second clips, but the duration is configurable, and you can even pre-record so you get x seconds before the detection is made if required.

I currently have some minor timing issues in the recorded files, but other than that this works well and is stable.
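
For reference, the smart record calls look roughly like the sketch below (C, against the DS 5.0 gst-nvdssr.h header as I remember it; deepstream-test5 does all of this for you, and the exact field and enum names should be checked against your SDK headers rather than taken from me):

```c
#include <gst/gst.h>
#include "gst-nvdssr.h"   /* smart record API shipped with DeepStream 5.0 */

static NvDsSRContext *sr_ctx = NULL;

/* Called by the smart record bin when a clip has been written. */
static gpointer
record_done_cb (NvDsSRRecordingInfo *info, gpointer user_data)
{
  g_print ("Smart record clip finished\n");
  return NULL;
}

static void
setup_smart_record (void)
{
  NvDsSRInitParams params = { 0 };

  params.containerType   = NVDSSR_CONTAINER_MP4;  /* or MKV */
  params.dirpath         = (gchar *) "/tmp/clips";
  params.fileNamePrefix  = (gchar *) "detection";
  params.defaultDuration = 15;  /* seconds, like the 15-second clips mentioned above */
  /* Pre-record cache; I believe the field is cacheSize in DS 5.x
     (videoCacheSize in later releases). */
  params.cacheSize       = 5;
  params.callback        = record_done_cb;

  /* Check the returned NvDsSRStatus in real code. */
  NvDsSRCreate (&sr_ctx, &params);

  /* sr_ctx->recordbin is a GstElement: add it to your pipeline and feed it
     encoded H.264/H.265 data, e.g. from a tee after the stream parser. */
}

/* Call this when your inference logic flags a detection. */
static void
on_detection (void)
{
  NvDsSRSessionId session = 0;

  /* startTime = seconds of cached video to include before "now",
     duration  = seconds to keep recording from "now". */
  NvDsSRStart (sr_ctx, &session, 5, 15, NULL);
  /* NvDsSRStop (sr_ctx, session) ends the clip early if needed. */
}
```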

Thanks J

I also came across this feature after I posted the query; I have yet to try it out.
Thanks for your response!

Cheers.

Hello, I want to use smart record to save the video clips with detection results. In your reply, you said “you can also use it AFTER the Tiler/Demuxer components”. What does this mean, and how can I implement it? Thanks.

So the example provided by NVIDIA, deepstream-test5, shows the use of the smart record bin prior to decoding the source stream. This is an efficient approach; however, you don’t get any bounding boxes or other metadata drawn onto the frames.

So you could save the inference/tracker metadata and use some other process to draw the bounding boxes onto the recorded files, but you would need to be careful about timing.
Or you can add the smart record bin after the inference, tracker and OSD elements. As the SR bin requires encoded data, you will have to re-encode the stream with the nvv4l2h264enc/nvv4l2h265enc element first; a sketch of that wiring is below.
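
Roughly, the wiring for the second option looks like this (C; the element factory names are real GStreamer/DeepStream elements, but treat the NvDsSRContext member name and the overall sketch as something to verify against gst-nvdssr.h and deepstream-test5 rather than drop-in code):

```c
#include <gst/gst.h>
#include "gst-nvdssr.h"

/* Sketch: branch off after nvdsosd, re-encode, and feed the smart record bin.
   Assumes `pipeline`, `osd` (nvdsosd) and `sr_ctx` (created with NvDsSRCreate,
   as in the earlier sketch) already exist. */
static gboolean
link_record_branch (GstElement *pipeline, GstElement *osd, NvDsSRContext *sr_ctx)
{
  GstElement *tee, *queue, *conv, *capsfilter, *enc, *parse;
  GstCaps *nvmm_caps;

  tee        = gst_element_factory_make ("tee",            "record-tee");
  queue      = gst_element_factory_make ("queue",          "record-queue");
  conv       = gst_element_factory_make ("nvvideoconvert", "record-conv");
  capsfilter = gst_element_factory_make ("capsfilter",     "record-caps");
  enc        = gst_element_factory_make ("nvv4l2h264enc",  "record-enc");
  parse      = gst_element_factory_make ("h264parse",      "record-parse");
  if (!tee || !queue || !conv || !capsfilter || !enc || !parse)
    return FALSE;

  nvmm_caps = gst_caps_from_string ("video/x-raw(memory:NVMM), format=I420");
  g_object_set (capsfilter, "caps", nvmm_caps, NULL);
  gst_caps_unref (nvmm_caps);

  gst_bin_add_many (GST_BIN (pipeline), tee, queue, conv, capsfilter, enc,
                    parse, sr_ctx->recordbin, NULL);

  /* osd -> tee; one tee branch goes on to the display/streaming sink
     (not shown), the other is re-encoded and fed to the record bin. */
  return gst_element_link (osd, tee) &&
         gst_element_link_many (tee, queue, conv, capsfilter, enc, parse,
                                sr_ctx->recordbin, NULL);
}
```

The tee lets the normal display branch keep running, while the record branch only writes files while an NvDsSRStart session is active.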

Hi, thanks for your quick reply. For your first solution, we have already used FFmpeg to save frames with bounding boxes successfully. However, we want to do this in DeepStream, so we will try the second solution. Does the SR bin require the output stream to be re-encoded all the time, or only when there are detection boxes? Can this be configured?

I find it best to re-encode full time. It's done in hardware and there seems to be no perceptible performance difference.
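
If it helps, the hardware encoder used for the always-on re-encode can be tuned; the properties below are the standard nvv4l2h264enc ones on my platform, but it is worth confirming them with gst-inspect-1.0 nvv4l2h264enc:

```c
#include <gst/gst.h>

/* Tune the hardware encoder from the wiring sketch above. */
static void
configure_record_encoder (GstElement *enc)   /* the nvv4l2h264enc element */
{
  g_object_set (enc,
                "bitrate", 4000000,       /* target bitrate in bits per second */
                "iframeinterval", 30,     /* one I-frame every 30 frames */
                "insert-sps-pps", TRUE,   /* in-stream headers so each clip starts cleanly */
                NULL);
}
```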

Ok, we will try this solution. Thank you very much.