Extracting frames from VST (Video Storage Toolkit)

I am trying to run VST with RTSP streams. How can I extract frames from a past timestamp?

Scenario:

  1. RTSP stream running and started at t=0
  2. At t = 100, I want to extract a frame from, say, t = 20

How can I achieve the above? I went through the list of APIs provided in the Swagger UI,
but I was not able to find any API that can help me extract a frame, a set of frames, or a video clip from a past time.

Is there an API that can help with this? If yes, can you please point me to it? If not, is there another way to accomplish the task?

VST will store the RTSP video. Users can play back the stored video in a browser through WebRTC. There are many ways to extract video frames from videos depending on your use case. Can you share more details on your use case, so we can improve VST or suggest other ways to extract video frames? Do you want this feature on Jetson or on an NVIDIA dGPU?

My use case is something like this:

Camera → VST → DeepStream → Message broker → App logic

  • VST is set up with the cameras
  • The DeepStream app reads the streams from VST and performs prediction
  • The predicted results go to the message broker
  • The app logic reads from the broker and performs analysis. If any event or activity is observed, it needs to get that specific frame, or a list of frames, as evidence

E.g.,
An object detection use case: count the number of people in a frame and alert if the number of people is > 5, plus some other logic.
Once the application logic reads from the broker and observes that the number of people is > 5 (plus the other logic), it needs to get that specific frame and send it to the cloud.

I am looking to extract the frame programmatically (a rough sketch of my app-logic side is included below).
I am running all the components on a physical server with a dGPU.
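
For context, this is roughly what my app-logic consumer looks like. I'm assuming a Kafka broker behind DeepStream's msgbroker here; the topic name, broker address, and payload keys are just placeholders for whatever the msgconv schema actually produces:

```python
import json
from kafka import KafkaConsumer  # assumption: Kafka broker; other brokers work similarly

PERSON_THRESHOLD = 5

consumer = KafkaConsumer(
    "deepstream-events",                 # placeholder topic name
    bootstrap_servers="localhost:9092",  # placeholder broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Payload layout depends on the msgconv schema; these keys are illustrative.
    objects = event.get("objects", [])
    person_count = sum(1 for obj in objects if "person" in str(obj).lower())

    if person_count > PERSON_THRESHOLD:
        sensor_id = event.get("sensorId")
        timestamp = event.get("@timestamp")
        # <-- This is where I need to fetch the actual video frame for
        #     (sensor_id, timestamp) as evidence and push it to the cloud.
        print(f"Alert on {sensor_id} at {timestamp}: {person_count} people")
```

The gap is exactly at the marked point: getting the frame for that sensor and timestamp after the fact.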

Do you want to send the object crop or the full video frame to the cloud? DeepStream has a sample that sends the object crop to the cloud as a JPEG image. Is it possible to implement the analysis in the DeepStream container, so the DeepStream application can decide whether to send the video frame to the cloud based on the analysis result?

I am planning to send the entire frame to the cloud. Is there any specific reason to put the analysis in the DeepStream container instead of a separate logic container?

Since you want to send a specific video frame to the cloud based on the analysis result (number of people > 5 plus some other logic), I suggest putting the analysis in the DeepStream container.

Can you please help me understand the reason behind this, e.g. better performance, scaling, throughput, etc.?

I am trying to make a decision based on the difference between putting the analysis in the DeepStream container vs. in a logic container.

If the analysis is located in the DeepStream application, the application can send the current video frame to the cloud based on the analysis result. If the analysis is located in the logic container, you need additional software to get the past video frame, or you can extract the video frame from the stored video later, based on the analysis result, when you check it.
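
For example, if the analysis lives in the DeepStream application, a GStreamer pad probe can count the detected persons in the current batch and save or upload that exact frame right away. A minimal sketch with the Python bindings (pyds) could look like the following; the class ID, threshold, and the upload step are specific to your model and pipeline, and reading the frame back into host memory requires an RGBA nvvideoconvert upstream of this pad:

```python
import cv2
import numpy as np
import pyds
from gi.repository import Gst

PERSON_CLASS_ID = 2    # assumption: depends on your model's label order
PERSON_THRESHOLD = 5

def analysis_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Count detected persons in this frame
        person_count = 0
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            if obj_meta.class_id == PERSON_CLASS_ID:
                person_count += 1
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        if person_count > PERSON_THRESHOLD:
            # Map the current frame into host memory and save it as evidence.
            # The buffer must be RGBA at this pad for get_nvds_buf_surface to work.
            rgba = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
            bgr = cv2.cvtColor(np.array(rgba, copy=True), cv2.COLOR_RGBA2BGR)
            cv2.imwrite(f"evidence_{frame_meta.frame_num}.jpg", bgr)
            # upload_to_cloud(...)  # hypothetical helper for your cloud endpoint

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
```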

VST has a REST API to get a video snippet or a JPEG image between two timestamps for a given sensor ID. The video snippet is H.264-encoded video. Is that what you want for your project?
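
From the logic container, the call would look something like the sketch below. The route, query parameter names, and timestamp format are placeholders; please take the exact values from the VST Swagger UI:

```python
import requests

VST_BASE_URL = "http://localhost:30000"  # placeholder: your VST host and port

def fetch_evidence_clip(sensor_id, start_time, end_time, out_path):
    """Download an H.264 snippet between two timestamps for one sensor.

    The route and parameter names below are placeholders; substitute the
    actual ones listed in VST's Swagger UI.
    """
    resp = requests.get(
        f"{VST_BASE_URL}/api/v1/storage/video",  # placeholder route
        params={
            "sensorId": sensor_id,
            "startTime": start_time,  # timestamp format per the API spec
            "endTime": end_time,
        },
        stream=True,
        timeout=30,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)
    return out_path
```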

Thanks. That's what I am looking for.

One last question: I see VST recording videos of only 60 seconds each. Is that the only option, or can it be increased?

You can't change it, but you can request the stored video as described in: Recording in VST(Video storage toolkit) - Intelligent Video Analytics / Metropolis Microservices for Jetson - NVIDIA Developer Forums

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.