How can custom analytic results be rendered onto the video output over WebRTC?
The solution needs to meet the following requirements:
- Render the analytic results on the live video using the received metadata, in real time and with low latency.
- Render the analytic results on recorded video during playback, using the stored metadata.
As far as I know, VST does not provide these functions, so what would be the best way to implement them? The options I see are:
- Render the results in DeepStream (burning them into the frames) and play the video through VST; this requires additional storage for a second copy of the video with the analytic results composited in.
- Render the analytic results in a media server such as Pion.
- Receive the video and the analytic metadata separately on the frontend (or playback client) and render the overlay there.
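For the last option, a common pattern is to draw the metadata onto a `<canvas>` positioned over the `<video>` element. Below is a minimal sketch, assuming the metadata arrives (e.g. over a WebRTC data channel) as JSON messages containing bounding boxes normalized to [0, 1]; the message shape and the function names (`scaleBox`, `drawBoxes`, `attachOverlay`) are illustrative assumptions, not an existing API:

```javascript
// Convert a normalized box to pixel coordinates for the current video size.
function scaleBox(box, width, height) {
  return {
    x: box.x * width,
    y: box.y * height,
    w: box.w * width,
    h: box.h * height,
    label: box.label,
  };
}

// Draw one frame's boxes onto the 2D context of an overlay <canvas>.
function drawBoxes(ctx, boxes, width, height) {
  ctx.clearRect(0, 0, width, height);
  ctx.strokeStyle = "lime";
  ctx.fillStyle = "lime";
  ctx.font = "14px sans-serif";
  for (const b of boxes.map((box) => scaleBox(box, width, height))) {
    ctx.strokeRect(b.x, b.y, b.w, b.h);
    ctx.fillText(b.label, b.x + 2, b.y - 4);
  }
}

// Wire-up: re-render the overlay whenever a metadata message arrives.
// `dataChannel` is an RTCDataChannel, `video` an HTMLVideoElement,
// `canvas` an HTMLCanvasElement stacked above the video via CSS.
function attachOverlay(dataChannel, video, canvas) {
  const ctx = canvas.getContext("2d");
  dataChannel.onmessage = (ev) => {
    const { boxes } = JSON.parse(ev.data); // assumed message shape
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    drawBoxes(ctx, boxes, canvas.width, canvas.height);
  };
}
```

The same `drawBoxes` path can serve playback: instead of a data channel, fetch the stored metadata and select the box set matching the current `video.currentTime`. The main difficulty for the live case is frame-accurate synchronization, since the data channel and the media track are not inherently synchronized.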