Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 525.105.17
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I need to store chunks of the incoming video to file, with the start (and end?) frame id, as used in the metadata, as part of the filename(s). If the frame id is not possible (or not optimal), then a timestamp (as in the test5 smart-record sample) could be useful instead. I’d prefer not to use NvDsMultiSrcInput/NvDsSampleC2DSmartRecordTrigger/NvDsMsgBrokerC2DReceiver (as in test5), because I need to store all of the video and don’t want to send start/stop messages continuously.
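To make the intent concrete: assuming a fixed, known frame rate and frame ids that increase contiguously from 0, the filename scheme I have in mind could be computed like this (pure-Python sketch; the function name, prefix, and parameters are my own, purely illustrative — in a plain GStreamer pipeline this kind of logic could, for example, be attached to splitmuxsink’s format-location signal):

```python
def chunk_filename(fragment_id, fps=30, chunk_seconds=10, prefix="rec"):
    """Return a filename embedding the first and last frame id of a chunk.

    Assumes a constant frame rate (fps) and frame ids that start at 0 and
    increase contiguously, so the fragment index alone determines the range.
    """
    frames_per_chunk = fps * chunk_seconds
    start = fragment_id * frames_per_chunk          # first frame id in this chunk
    end = start + frames_per_chunk - 1              # last frame id in this chunk
    return f"{prefix}_{start:09d}_{end:09d}.mp4"

# Example: the third 10 s chunk at 30 fps covers frames 600..899
print(chunk_filename(2))  # rec_000000600_000000899.mp4
```

With filenames like these, the separately stored per-frame metadata (frame id, timestamp) can be mapped back to the correct file by a simple range lookup on the frame id.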
I’m trying to build a graph using DeepStream Graph Composer that saves the incoming video stream to disk in separate files, each containing 10 s of video (the 10 s duration is not crucial; it could be a different duration). I’ve tried using the following set of components:
- nvidia::deepstream::NvDsMultiSrcInputWithRecord (configured with smart-rec-default-duration=10), associated with
- nvidia::gxf::PeriodicSchedulingTerm (configured with recess_period=10s)
The idea is to trigger the NvDsRecordAction every 10 seconds, so that each stored output file contains 10 seconds of video.
I could not find any documentation on how to configure the NvDsRecordAction and PeriodicSchedulingTerm components and get them working. The DeepStream SDK documentation in the links below only describes each component’s individual configuration settings, not how to connect these components or make them work together in a graph.
Using Graph Composer, it is possible to drag PeriodicSchedulingTerm onto the graph canvas, but I couldn’t find a way to connect it to NvDsRecordAction. The graph’s .yaml then contains:
---
components:
- name: periodic_scheduling_term5
  parameters:
    recess_period: 10s
  type: nvidia::gxf::PeriodicSchedulingTerm
name: PeriodicSchedulingTerm
ui_property:
  position:
    x: -1360.8486328125
    y: 106.59441375732422
By chance, I also noticed that it is possible to drag PeriodicSchedulingTerm ‘into’ NvDsRecordAction, so that it becomes part of that component (as visually represented in Graph Composer). The graph’s .yaml then contains:
---
components:
- name: Record Action0
  type: nvidia::deepstream::NvDsRecordAction
- name: periodic_scheduling_term0
  parameters:
    recess_period: 10s
  type: nvidia::gxf::PeriodicSchedulingTerm
name: Record Action
ui_property:
  position:
    x: -1060.3350830078125
    y: 156.1263885498047
- What is the best way to store all incoming video as contiguous chunks, in files whose names contain the frame id (or timestamp), so that the frame metadata (containing frame id and timestamp), stored separately, can later be associated with these files again?
- Where can I find documentation/guidance on how to use the nvidia::gxf components, including PeriodicSchedulingTerm, in a DeepStream Graph Composer graph?
Thanks in advance!