The overall requirements are:
1. Multiple live sources (RTSP) with different resolutions and frame rates should be supported
2. For each source, run inference, tracking, and analytics
3. For each source, use the object and analytics metadata to apply user-defined rules that trigger actions (e.g. if a person crosses a line or enters a certain region, save a recording and notify an external system)
4.1 The recorded file should be the original stream (resolution, frame rate) with no overlays
4.2 The metadata for each frame of the recorded file should be saved to a file or database (this is to allow drawing the detections onto the original streams during playback, and also to allow searching for objects/events; see the example record after this list)
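For concreteness, the kind of per-frame record I have in mind for 4.2 might look like the sketch below. The field names and values are my own illustration, not any existing DeepStream schema:

```python
# One record per frame, written as a JSON line or a database row (hypothetical schema).
frame_record = {
    "source_id": 0,                         # which RTSP source the frame came from
    "frame_num": 1042,                      # frame index within that stream
    "ntp_timestamp": 1700000000123456789,   # key for matching against the recorded file
    "objects": [
        {"track_id": 7, "class_id": 0, "confidence": 0.91,
         "bbox": [412.0, 188.0, 64.0, 128.0]},  # left, top, width, height
    ],
}
```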
Most of this can currently be done with SmartRecord, since it saves the original streams as shown in the image.
The issue I am facing is with requirement 4.2: I am not sure whether it is even possible to save the metadata for the exact frames that SmartRecord saves.
My thinking is that one would need a plugin placed after the DeepStream inference/tracking components that saves the metadata with exactly the same start/stop logic as SmartRecord (e.g. a SmartMetaRecord), so that the correct metadata is captured. Unless there is a different way of matching metadata to the recorded file, e.g. a frame timestamp instead of a frame index.
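To illustrate the timestamp-matching idea, here is a minimal sketch of a buffer pad probe, assuming a Python/pyds pipeline. The probe name, the `meta.jsonl` output file, and the record fields are my own choices, not an existing API; the idea is to write one JSON line per frame, keyed by the buffer PTS / NTP timestamp, so the records could later be correlated with the frames in the SmartRecord file:

```python
import json

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

import pyds

# Hypothetical sink: one JSON object per frame, appended to a file.
# A database insert could replace this.
meta_log = open("meta.jsonl", "a")

def meta_probe(pad, info, user_data):
    buf = info.get_buffer()
    if not buf:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buf))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Collect the tracked objects for this frame.
        objects = []
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj.rect_params
            objects.append({
                "track_id": obj.object_id,
                "class_id": obj.class_id,
                "confidence": obj.confidence,
                "bbox": [rect.left, rect.top, rect.width, rect.height],
            })
            l_obj = l_obj.next

        record = {
            "source_id": frame_meta.source_id,
            "frame_num": frame_meta.frame_num,
            # buf_pts / ntp_timestamp are the keys I would use to line the
            # record up with the same frame in the SmartRecord file.
            "buf_pts": frame_meta.buf_pts,
            "ntp_timestamp": frame_meta.ntp_timestamp,
            "objects": objects,
        }
        meta_log.write(json.dumps(record) + "\n")

        l_frame = l_frame.next

    return Gst.PadProbeReturn.OK

# Attach after the tracker/analytics elements, e.g.:
# tracker.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, meta_probe, None)
```

Matching by `ntp_timestamp` assumes the pipeline attaches NTP timestamps (e.g. via `attach-sys-ts` on nvstreammux, if I understand that correctly); otherwise `buf_pts` would have to serve as the key, provided the recording path sees the same timestamps.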
Any suggestions would be much appreciated.