Add splitmuxsink to save multiple videos in deepstream-app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• TensorRT Version 7
• NVIDIA GPU Driver Version (valid for GPU only) 440
• Issue Type( questions, new requirements, bugs) question

I’m trying to add a splitmuxsink element to deepstream-app to save the output as multiple videos, split by size and duration. Earlier I tried this in a DeepStream test app, where the pipeline code is quite simple: I removed qtmux and replaced the filesink with splitmuxsink. Doing the same in deepstream-app isn’t working.
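For reference, the test-app change corresponds to a pipeline roughly like the following sketch, built here with gst_parse_launch(). The source, encoder, and output path are placeholders, not the real app's elements:

```c
#include <gst/gst.h>

/* Sketch only: qtmux + filesink replaced by splitmuxsink, which muxes and
 * splits the output files internally. Element choices and the location
 * pattern are illustrative. */
static GstElement *
build_split_pipeline (GError ** err)
{
  return gst_parse_launch (
      "videotestsrc is-live=true ! x264enc ! h264parse ! "
      "splitmuxsink location=/tmp/splitvideo%05d.mp4 "
      "max-size-time=10000000000 "  /* 10 s per file (nanoseconds) */
      "max-size-bytes=10000000",    /* ~10 MB per file */
      err);
}
```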

The git diff of deepstream_sink_bin.c is:

--- a/DeepStream/deepstreamMgr/src/apps-common/deepstream_sink_bin.c
+++ b/DeepStream/deepstreamMgr/src/apps-common/deepstream_sink_bin.c
@@ -397,18 +397,18 @@ create_encode_file_bin (NvDsSinkEncoderConfig * config, NvDsSinkBinSubBin * bin)
   }
 
   g_snprintf (elem_name, sizeof (elem_name), "sink_sub_bin_sink%d", uid);
-  bin->sink = gst_element_factory_make (NVDS_ELEM_SINK_FILE, elem_name);
+  bin->sink = gst_element_factory_make ("splitmuxsink", elem_name);
   if (!bin->sink) {
     NVGSTDS_ERR_MSG_V ("Failed to create '%s'", elem_name);
     goto done;
   }
 
-  g_object_set (G_OBJECT (bin->sink), "location", config->output_file_path,
-      "sync", config->sync, "async", FALSE, NULL);
+  g_object_set (G_OBJECT (bin->sink), "location", "/mnt/lprsResults0/splitvideo%05d.mp4",
+      "max-size-time", G_GUINT64_CONSTANT (10000000000), "max-size-bytes", G_GUINT64_CONSTANT (10000000), NULL);
   g_object_set (G_OBJECT (bin->transform), "gpu-id", config->gpu_id, NULL);
   gst_bin_add_many (GST_BIN (bin->bin), bin->queue,
       bin->transform, bin->codecparse, bin->cap_filter,
-      bin->encoder, bin->mux, bin->sink, NULL);
+      bin->encoder, bin->sink, NULL);
 
   NVGSTDS_LINK_ELEMENT (bin->queue, bin->transform);
 
@@ -416,8 +416,7 @@ create_encode_file_bin (NvDsSinkEncoderConfig * config, NvDsSinkBinSubBin * bin)
   NVGSTDS_LINK_ELEMENT (bin->cap_filter, bin->encoder);
 
   NVGSTDS_LINK_ELEMENT (bin->encoder, bin->codecparse);
-  NVGSTDS_LINK_ELEMENT (bin->codecparse, bin->mux);
-  NVGSTDS_LINK_ELEMENT (bin->mux, bin->sink);
+  NVGSTDS_LINK_ELEMENT (bin->codecparse, bin->sink);
 
   NVGSTDS_BIN_ADD_GHOST_PAD (bin->bin, bin->queue, "sink");

The full changed deepstream_sink_bin.c is on a GitHub Gist.

The error is:

** ERROR: <create_pipeline:1151>: Could not find 'sink' in 'sink_sub_bin_sink1'
** ERROR: <create_pipeline:1283>: create_pipeline failed
** ERROR: <initDeepStream:1512>: Failed to create pipeline
Quitting
App run failed

The error is "Could not find 'sink' in 'sink_sub_bin_sink1'". Are you sure 'sink_sub_bin_sink1' is the encode sink? Can you show your deepstream-app config file?

Hi @Fiona.Chen, the config is below (the original deepstream-app works with this config):

##############################
# NOTE: This file is automatically created by dsConfigParser. Don't edit it!
##############################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=/var/log/deepstreamLog/
kitti-track-output-dir=/var/log/deepstreamLog/

[tiled-display]
enable=1
width=1920
height=1080
rows=1
columns=1
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L#2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file:///home/myvideo.h264
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
# 1=Fakesink, 2=EGL (nveglglessink), 3=Filesink, 4=RTSP, 5=Overlay (Jetson only)
type=3
output-file=/mnt/lprsResults0//D191-20210204122128.mp4
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
source-id=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=12
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;0.5
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
batch-size=1
## Set muxer output width and height
width=1920
height=1080
##Boolean property to inform muxer that sources are live
live-source=0
gpu-id=0
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
#interval=2
gie-unique-id=1
nvbuf-memory-type=0
model-engine-file=/home/model_b1_gpu0_fp32.engine
config-file=pgie_detector_config.txt

[tracker]
enable=1
# For the case of NvDCF tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
#ll-config-file required for DCF/IOU only
ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=1
#enable-past-frame=0

[tests]
file-loop=0

There is only one sink in your config file, so why is there 'sink_sub_bin_sink1'? There should only be 'sink_sub_bin_sink0'. You can debug this part.

Are you sure it’s sink_sub_bin_sink0? I just checked with the original app and the official config files: when I print the elem_name for the sink, it’s sink_sub_bin_sink1.

If I add one more sink, it prints

sink: sink_sub_bin_sink1
sink: sink_sub_bin_sink2

I think the problem is probably how I’m connecting elements in the pipeline.

Your code causes deepstream-app to fail when adding the sink probe, because your sink plugin's sink pad is named "video", not "sink".

In /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-app/deepstream_app.c, in the function create_pipeline(), the following lines add the latency measurement probe to the sink plugin.

NVGSTDS_ELEM_ADD_PROBE (latency_probe_id,
    pipeline->instance_bins->sink_bin.sub_bins[0].sink, "sink",
    latency_measurement_buf_prob, GST_PAD_PROBE_TYPE_BUFFER,
    appCtx);

You need to change them to:

NVGSTDS_ELEM_ADD_PROBE (latency_probe_id,
    pipeline->instance_bins->sink_bin.sub_bins[0].sink, "video",
    latency_measurement_buf_prob, GST_PAD_PROBE_TYPE_BUFFER,
    appCtx);

If you want to add a new function to deepstream-app, you need to make sure that every related function still works. You should take some time to get familiar with the DeepStream sample code first.

Thanks a lot @Fiona.Chen! Got it.

Even after changing "sink" to "video", the videos are not getting saved. I get 0 FPS and the error below:

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
ERROR from muxer: Downstream is not seekable - will not be able to create a playable file
Debug info: gstqtmux.c(2780): gst_qt_mux_start_file (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/GstSplitMuxSink:sink_sub_bin_sink1/GstMP4Mux:muxer
[NvDCF] De-initialized
App run successful

The default muxer in splitmuxsink is mp4mux.

Solved it by setting enc-type=1. Earlier it was using the hardware encoder (nvv4l2h264enc), which was not working with splitmuxsink, but the software encoder (x264enc) works.
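In the [sink0] group of the config posted above, that fix is the single line:

```
#encoder type 0=Hardware 1=Software
enc-type=1
```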
