How can the analytic results be properly rendered onto the output video?

How can custom analytic results be rendered onto the output video delivered over WebRTC?

The solution needs to meet the following requirements:

  1. Render the analytics overlay on the live video, using the received metadata, in real time and with low latency.
  2. Render the analytics overlay on recorded video during playback, using the stored metadata.

As far as I know, VST does not provide these functions, so what would be the best way to implement them?

  1. Render the overlays in DeepStream and play the video back through VST; the drawback is that this requires additional storage for a second copy of the video with the analytic results burned in.
  2. Render the analytic results with a media server such as Pion.
  3. Receive the video and the analytics metadata separately on the frontend (or playback client) and render the overlay there (see the sketch after this list).
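
For option 3, here is a rough sketch of the browser side, assuming the analytics metadata arrives as JSON on a WebRTC data channel alongside the video track. The message shape (`bbox`, `label`) and the element ids are placeholders of my own, not a VST or DeepStream format:

```typescript
// Live-overlay sketch: draw per-frame detections on a transparent
// canvas positioned over the <video> element showing the WebRTC stream.
// The FrameMeta message shape below is hypothetical.
interface DetectedObject {
  bbox: [number, number, number, number]; // x, y, w, h in source pixels
  label: string;
}
interface FrameMeta {
  objects: DetectedObject[];
}

const video = document.getElementById("stream") as HTMLVideoElement;
const canvas = document.getElementById("overlay") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

let latest: FrameMeta = { objects: [] };

// dataChannel comes from the app's existing RTCPeerConnection setup.
function attachMetadataChannel(dataChannel: RTCDataChannel): void {
  dataChannel.onmessage = (ev: MessageEvent<string>) => {
    latest = JSON.parse(ev.data) as FrameMeta; // keep only the newest frame
  };
}

function draw(): void {
  // Match the canvas to the decoded video size so bbox pixels line up;
  // resizing also clears the canvas and resets the 2D context state.
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  ctx.strokeStyle = "lime";
  ctx.fillStyle = "lime";
  ctx.font = "16px sans-serif";
  for (const obj of latest.objects) {
    const [x, y, w, h] = obj.bbox;
    ctx.strokeRect(x, y, w, h);
    ctx.fillText(obj.label, x, y - 4);
  }
  requestAnimationFrame(draw); // redraw every display frame
}
requestAnimationFrame(draw);
```

Drawing on a canvas stacked over the video keeps the recorded stream untouched, so no second copy of the video would have to be stored.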

The analytics result (counting) is already displayed in VST. What is your requirement?

I want to draw object bounding boxes or class labels on both real-time and playback video.
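
For the playback half, a similar sketch, assuming the metadata was stored as an array of timestamped records (again a placeholder format of my own) that can be looked up against `video.currentTime`:

```typescript
// Playback-overlay sketch: pick the stored record whose timestamp is
// closest at or below the current playback position and draw it.
// The StoredFrame format below is hypothetical.
interface StoredObject {
  bbox: [number, number, number, number]; // x, y, w, h in source pixels
  label: string;
}
interface StoredFrame {
  t: number; // seconds from the start of the recording
  objects: StoredObject[];
}

function overlayPlayback(
  video: HTMLVideoElement,
  canvas: HTMLCanvasElement,
  frames: StoredFrame[] // must be sorted ascending by t
): void {
  const ctx = canvas.getContext("2d")!;

  // Binary search for the latest record at or before time t.
  const frameAt = (t: number): StoredFrame | undefined => {
    let lo = 0, hi = frames.length - 1, best = -1;
    while (lo <= hi) {
      const mid = (lo + hi) >> 1;
      if (frames[mid].t <= t) { best = mid; lo = mid + 1; }
      else { hi = mid - 1; }
    }
    return best >= 0 ? frames[best] : undefined;
  };

  const tick = (): void => {
    canvas.width = video.videoWidth;  // resizing also clears the canvas
    canvas.height = video.videoHeight;
    const meta = frameAt(video.currentTime);
    if (meta) {
      ctx.strokeStyle = "lime";
      ctx.fillStyle = "lime";
      ctx.font = "16px sans-serif";
      for (const o of meta.objects) {
        const [x, y, w, h] = o.bbox;
        ctx.strokeRect(x, y, w, h);
        ctx.fillText(o.label, x, y - 4);
      }
    }
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```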

We will consider another VMS / media-server solution, because VST does not provide plug-in or customization capabilities.

Displaying bboxes and labels is on the VST roadmap. Stay tuned.