How to save frames from different sources separately with streamdemux?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.3
• Issue Type( questions, new requirements, bugs): question

I have a pipeline that looks like this:

camera-sources -> streammux -> pgie -> tracker -> streamdemux |-> sink_bin01
                                                              |-> sink_bin02

where each sink_bin is as follows:

nvvideoconvert -> capsfilter -> videoconvert -> capsfilter -> jpegenc -> multifilesink

Dot file:

The frames of each source should be saved separately, but they aren't. If I use a tiler and save the batched frames with multifilesink, it works; but when I use a demux to save the frames of each source separately, it doesn't: the pipeline runs, yet nothing happens and no error is printed. What did I do wrong here?

No. There is no useful information in your description.

Hi @Fiona.Chen,

Hope this contains more info.

So I simplified the pipeline even further:

2 camera-sources -> streammux -> pgie -> tracker -> streamdemux |-> sink_bin01
                                                                |-> sink_bin02

Now each sink_bin is simply:

nvegltransform -> nveglglessink

So the pipeline now runs inference and then displays each source in a separate video window.

I also attached a probe at the streamdemux sink pad; the probe function looks as follows:

static GstPadProbeReturn
test_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *)info->data;
  NvDsFrameMeta *frame_meta = NULL;
  NvDsObjectMeta *obj_meta = NULL;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;

  // extract the NvDsBatchMeta from the GstBuffer
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;
  printf("Frames per batch: %d\n", batch_meta->num_frames_in_batch);

  // process each NvDsFrameMeta in the NvDsBatchMeta
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
    frame_meta = (NvDsFrameMeta *)(l_frame->data);
    printf("batch-id (%d): pad-index (%d): source-id (%d): frame-num (%d)\n",
           frame_meta->batch_id, frame_meta->pad_index, frame_meta->source_id, frame_meta->frame_num);

    // process each NvDsObjectMeta in the NvDsFrameMeta
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
      obj_meta = (NvDsObjectMeta *)(l_obj->data);
      // printf("Tracker-id: %ld, label: %s | ", obj_meta->object_id, obj_meta->obj_label);
    }
  }

  return GST_PAD_PROBE_OK;
}

When I ran the pipeline, it froze after the first frame, and one of the screens displayed nothing at all. Please see the photo:

When run with 1 video source, the pipeline ran normally, but with 2 camera sources it froze after the first batch. How do I locate the cause of this problem and fix it?

Please let me know if you need more info.

According to your description, the pipeline is OK, so the problem is likely in your code.
Can you simplify your code (remove the parts that have nothing to do with DeepStream) and upload it?

Hi @Fiona.Chen,

Please find the code below:

test_demux.c (15.9 KB)

pgie_config.txt (4.4 KB)

Makefile (2.1 KB)

Model & yolo3tlt custom parser: GitHub - NVIDIA-AI-IOT/deepstream_tlt_apps: Sample apps to demonstrate how to deploy models trained with TLT on DeepStream

Both cameras are Logi C270 HD webcam.

The attached file works. My camera is a little different from yours, so please change the camera caps to match your camera's properties. test_demux.c (16.4 KB)