• Hardware Platform (Jetson / GPU) : NVIDIA Jetson AGX Orin
• DeepStream Version : 7.0
• JetPack Version (valid for Jetson only) : 6.0
• TensorRT Version : 8.6.2.3
• Issue Type (questions, new requirements, bugs) : question
I have this GStreamer pipeline that I would like to recreate in Python later.
gst-launch-1.0 -v filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! 'video/x-h264, framerate=60/1' ! h264parse ! nvv4l2decoder ! nvv4l2h265enc iframeinterval=60 idrinterval=60 ! h265parse ! splitmuxsink location=output/video_R%02d.h265 max-size-time=1000000000
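Since the goal is to recreate this in Python later, a minimal sketch of how the same pipeline could be driven via Gst.parse_launch might look like the following (assuming PyGObject and the NVIDIA GStreamer plugins are installed; the description string is taken verbatim from the gst-launch command above):

```python
# Sketch: the gst-launch pipeline above, kept as a parse_launch
# description so it can be reused from Python later.
PIPELINE_DESC = (
    "filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! "
    "video/x-h264, framerate=60/1 ! "
    "h264parse ! nvv4l2decoder ! "
    "nvv4l2h265enc iframeinterval=60 idrinterval=60 ! "
    "h265parse ! "
    "splitmuxsink location=output/video_R%02d.h265 max-size-time=1000000000"
)


def run() -> None:
    # Deferred import so the description above can be inspected
    # even on a machine without PyGObject installed.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.parse_launch(PIPELINE_DESC)
    pipeline.set_state(Gst.State.PLAYING)

    # Block until EOS or an error is posted on the bus, then clean up.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(
        Gst.CLOCK_TIME_NONE,
        Gst.MessageType.EOS | Gst.MessageType.ERROR,
    )
    pipeline.set_state(Gst.State.NULL)


if __name__ == "__main__":
    run()
```

This is only a sketch; on the actual Jetson target the hardware elements (nvv4l2decoder, nvv4l2h265enc) require the DeepStream/JetPack plugins to be present.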
How can I replace splitmuxsink with multifilesink in the pipeline above? The original pipeline uses splitmuxsink with the max-size-time property to split the output by time: it cuts the incoming video into 1-second segments and saves each to a separate file. When switching to multifilesink, however, I cannot find an exact equivalent. I did find the max-file-duration property on multifilesink, but it does not behave the same way. There is also a next-file property with a key-unit value for splitting files at keyframes, but that does not fully replicate the behavior either.
What is the correct combination of properties for multifilesink that would mimic the time-based file splitting provided by splitmuxsink in this scenario?