Creating a pipeline that saves a separate sink file for each stream of a multi-stream input

Hi, I'm working on a pipeline that goes multi-stream → infer → multiple file sinks (video/images).

Previously I added a tiler element (multi-stream → infer → tiler → single file sink) to make sure the pipeline is fine, and indeed it is OK. But as I said, I want a separate file sink for each stream, e.g. uri_0 → mp4_0; uri_1 → mp4_1; and so on.

I have looked through a lot of GStreamer pipeline tutorials/examples but have not found a solution. I hope I can find an answer here. Thanks!

You can use nvstreamdemux to implement it:
gst-launch-1.0 nvstreammux name=mux batch-size=2 ! nvinfer config-file-path=./dstest1_pgie_config.txt ! nvstreamdemux name=demux \
filesrc location=./sample_720p.h264 ! h264parse ! nvdec_h264 ! queue ! mux.sink_0 \
filesrc location=./sample_720p2.h264 ! h264parse ! nvdec_h264 ! queue ! mux.sink_1 \
demux.src_0 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd font-size=15 ! nvvideoconvert ! "video/x-raw, format=RGBA" ! videoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./out3.mp4 \
demux.src_1 ! "video/x-raw(memory:NVMM), format=NV12" ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=RGBA" ! nvdsosd font-size=15 ! nvvideoconvert ! "video/x-raw, format=RGBA" ! videoconvert ! "video/x-raw, format=NV12" ! x264enc ! qtmux ! filesink location=./out4.mp4
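
In application code, the same per-stream branching is done by requesting one src_%u pad from nvstreamdemux per input and linking it to that stream's own branch. A minimal sketch, assuming demux, branch_queue and stream_id already exist (the names are illustrative):

/* Request a per-stream src pad from nvstreamdemux and link it to the
 * branch that encodes and writes this stream's file. */
gchar pad_name[16] = { };
g_snprintf (pad_name, 15, "src_%u", stream_id);
GstPad *demux_src = gst_element_get_request_pad (demux, pad_name);
GstPad *branch_sink = gst_element_get_static_pad (branch_queue, "sink");
if (gst_pad_link (demux_src, branch_sink) != GST_PAD_LINK_OK)
  g_printerr ("Failed to link %s to its branch\n", pad_name);
gst_object_unref (demux_src);
gst_object_unref (branch_sink);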

Hi @bcao!

I tried following your idea of using nvstreamdemux to implement it, but running your pipeline failed (I'm using a Tesla P100). Below is my code, based on deepstream-test3 (to create multiple streams via source bins) and the back-to-back-detectors app (two detector engines in the pipeline). It is not working yet. If I use the tiler element to display the output in a grid, everything is fine, but after changing the code to create a sink bin per stream to generate the outputs, nothing is written: the pipeline still runs, consumes the input streams and does inference, it just produces no sink output. I think the problem is between my sink_bin and nvstreamdemux. Can you help me figure it out, please?

/*
 * Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <time.h>
#include "gstnvdsmeta.h"

#define MAX_DISPLAY_LEN 64

#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2

#define SGIE_CLASS_ID_LP 1
#define SGIE_CLASS_ID_FACE 0

/* Change this to 0 to make the 2nd detector act as a primary(full-frame) detector.
 * When set to 1, it will act as secondary(operates on primary detected objects). */
#define SECOND_DETECTOR_IS_SECONDARY 1

/* The muxer output resolution must be set if the input streams will be of
 * different resolution. The muxer will scale all the input frames to this
 * resolution. */
#define MUXER_OUTPUT_WIDTH 1280
#define MUXER_OUTPUT_HEIGHT 720

/* Muxer batch formation timeout in microseconds (4000000 us = 4 s here;
 * the DeepStream samples typically use 40000 us = 40 ms). Should ideally be
 * set based on the fastest source's framerate. */
#define MUXER_BATCH_TIMEOUT_USEC 4000000

/* NVIDIA Decoder source pad memory feature. This feature signifies that source
 * pads having this capability will push GstBuffers containing cuda buffers. */
#define GST_CAPS_FEATURES_NVMM "memory:NVMM"

gint frame_number = 0;
gchar pgie_classes_str[4][32] = { "Vehicle", "TwoWheeler", "Person",
  "Roadsign"
};

#define PRIMARY_DETECTOR_UID 1
#define SECONDARY_DETECTOR_UID 2

/* Timers used to compute the overall FPS summary printed at EOS. */
clock_t t_start;
clock_t t_end;

/* osd_sink_pad_buffer_probe  will extract metadata received on OSD sink pad
 * and update params for drawing rectangle, object information etc. */

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
    GstBuffer *buf = (GstBuffer *) info->data;
    NvDsObjectMeta *obj_meta = NULL;
    guint vehicle_count = 0;
    guint person_count = 0;
    guint face_count = 0;
    guint lp_count = 0;
    NvDsMetaList * l_frame = NULL;
    NvDsMetaList * l_obj = NULL;
    NvDsDisplayMeta *display_meta = NULL;

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
        int offset = 0;
        for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
                l_obj = l_obj->next) {
            obj_meta = (NvDsObjectMeta *) (l_obj->data);

            /* Check that the object has been detected by the primary detector
             * and that the class id is that of vehicles/persons. */
            if (obj_meta->unique_component_id == PRIMARY_DETECTOR_UID) {
              if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE)
                vehicle_count++;
              if (obj_meta->class_id == PGIE_CLASS_ID_PERSON)
                person_count++;
            }

            if (obj_meta->unique_component_id == SECONDARY_DETECTOR_UID) {
              if (obj_meta->class_id == SGIE_CLASS_ID_FACE) {
                face_count++;
                /* Print this info only when operating in secondary mode. */
                if (SECOND_DETECTOR_IS_SECONDARY)
                  g_print ("Face found for parent object %p (type=%s)\n",
                      obj_meta->parent, pgie_classes_str[obj_meta->parent->class_id]);
              }
              if (obj_meta->class_id == SGIE_CLASS_ID_LP) {
                lp_count++;
                /* Print this info only when operating in secondary mode. */
                if (SECOND_DETECTOR_IS_SECONDARY)
                  g_print ("License plate found for parent object %p (type=%s)\n",
                      obj_meta->parent, pgie_classes_str[obj_meta->parent->class_id]);
              }
            }
        }
        display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
        NvOSD_TextParams *txt_params  = &display_meta->text_params[0];
        display_meta->num_labels = 1;
        txt_params->display_text = g_malloc0 (MAX_DISPLAY_LEN);
        /* Cap each write to the space remaining in the buffer. */
        offset = snprintf(txt_params->display_text, MAX_DISPLAY_LEN, "Person = %d ", person_count);
        offset += snprintf(txt_params->display_text + offset, MAX_DISPLAY_LEN - offset, "Vehicle = %d ", vehicle_count);
        offset += snprintf(txt_params->display_text + offset, MAX_DISPLAY_LEN - offset, "Face = %d ", face_count);
        offset += snprintf(txt_params->display_text + offset, MAX_DISPLAY_LEN - offset, "License Plate = %d ", lp_count);

        /* Now set the offsets where the string should appear */
        txt_params->x_offset = 10;
        txt_params->y_offset = 12;

        /* Font , font-color and font-size */
        txt_params->font_params.font_name = "Serif";
        txt_params->font_params.font_size = 10;
        txt_params->font_params.font_color.red = 1.0;
        txt_params->font_params.font_color.green = 1.0;
        txt_params->font_params.font_color.blue = 1.0;
        txt_params->font_params.font_color.alpha = 1.0;

        /* Text background color */
        txt_params->set_bg_clr = 1;
        txt_params->text_bg_clr.red = 0.0;
        txt_params->text_bg_clr.green = 0.0;
        txt_params->text_bg_clr.blue = 0.0;
        txt_params->text_bg_clr.alpha = 1.0;

        nvds_add_display_meta_to_frame(frame_meta, display_meta);
    }

g_print ("Frame Number = %d Vehicle Count = %d Person Count = %d"
            " Face Count = %d License Plate Count = %d\n",
            frame_number, vehicle_count, person_count,
            face_count, lp_count);
    frame_number++;
    return GST_PAD_PROBE_OK;
}

static gboolean
bus_call (GstBus * bus, GstMessage * msg, gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      t_end = clock(); 
      clock_t t = t_end - t_start;
      double time_taken = ((double)t)/CLOCKS_PER_SEC; // in seconds 
      double fps = frame_number/time_taken;
      g_print("\nThe program took %.2f seconds to redact %d frames, pref = %.2f fps \n\n", time_taken,frame_number,fps); 
      
      g_main_loop_quit (loop);
      break;
    case GST_MESSAGE_ERROR:{
      gchar *debug;
      GError *error;
      gst_message_parse_error (msg, &error, &debug);
      g_printerr ("ERROR from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      g_free (debug);
      g_printerr ("Error: %s\n", error->message);
      g_error_free (error);
      g_main_loop_quit (loop);
      break;
    }
    default:
      break;
  }
  return TRUE;
}

static void
cb_newpad (GstElement * decodebin, GstPad * decoder_src_pad, gpointer data)
{
  g_print ("In cb_newpad\n");
  GstCaps *caps = gst_pad_get_current_caps (decoder_src_pad);
  const GstStructure *str = gst_caps_get_structure (caps, 0);
  const gchar *name = gst_structure_get_name (str);
  GstElement *source_bin = (GstElement *) data;
  GstCapsFeatures *features = gst_caps_get_features (caps, 0);

  /* Need to check if the pad created by the decodebin is for video and not
   * audio. */
  if (!strncmp (name, "video", 5)) {
    /* Link the decodebin pad only if decodebin has picked nvidia
     * decoder plugin nvdec_*. We do this by checking if the pad caps contain
     * NVMM memory features. */
    if (gst_caps_features_contains (features, GST_CAPS_FEATURES_NVMM)) {
      /* Get the source bin ghost pad */
      GstPad *bin_ghost_pad = gst_element_get_static_pad (source_bin, "src");
      if (!gst_ghost_pad_set_target (GST_GHOST_PAD (bin_ghost_pad),
              decoder_src_pad)) {
        g_printerr ("Failed to link decoder src pad to source bin ghost pad\n");
      }
      gst_object_unref (bin_ghost_pad);
    } else {
      g_printerr ("Error: Decodebin did not pick nvidia decoder plugin.\n");
    }
  }
}

static void
decodebin_child_added (GstChildProxy * child_proxy, GObject * object,
    gchar * name, gpointer user_data)
{
  g_print ("Decodebin child added: %s\n", name);
  if (g_strrstr (name, "decodebin") == name) {
    g_signal_connect (G_OBJECT (object), "child-added",
        G_CALLBACK (decodebin_child_added), user_data);
  }
  if (g_strstr_len (name, -1, "nvv4l2decoder") == name) {
    g_print ("Seting bufapi_version\n");
    g_object_set (object, "bufapi-version", TRUE, NULL);
  }
}

static GstElement *
create_source_bin (guint index, gchar * uri)
{
  GstElement *bin = NULL, *uri_decode_bin = NULL;
  gchar bin_name[16] = { };

  g_snprintf (bin_name, 15, "source-bin-%02d", index);
  /* Create a source GstBin to abstract this bin's content from the rest of the
   * pipeline */
  bin = gst_bin_new (bin_name);

  /* Source element for reading from the uri.
   * We will use decodebin and let it figure out the container format of the
   * stream and the codec and plug the appropriate demux and decode plugins. */
  uri_decode_bin = gst_element_factory_make ("uridecodebin", "uri-decode-bin");

  if (!bin || !uri_decode_bin) {
    g_printerr ("One element in source bin could not be created.\n");
    return NULL;
  }

  /* We set the input uri to the source element */
  g_object_set (G_OBJECT (uri_decode_bin), "uri", uri, NULL);

  /* Connect to the "pad-added" signal of the decodebin which generates a
   * callback once a new pad for raw data has been created by the decodebin */
  g_signal_connect (G_OBJECT (uri_decode_bin), "pad-added",
      G_CALLBACK (cb_newpad), bin);
  g_signal_connect (G_OBJECT (uri_decode_bin), "child-added",
      G_CALLBACK (decodebin_child_added), bin);

  gst_bin_add (GST_BIN (bin), uri_decode_bin);

  /* We need to create a ghost pad for the source bin which will act as a proxy
   * for the video decoder src pad. The ghost pad will not have a target right
   * now. Once the decode bin creates the video decoder and generates the
   * cb_newpad callback, we will set the ghost pad target to the video decoder
   * src pad. */
  if (!gst_element_add_pad (bin, gst_ghost_pad_new_no_target ("src",
              GST_PAD_SRC))) {
    g_printerr ("Failed to add ghost pad in source bin\n");
    return NULL;
  }

  return bin;
}

static GstElement *
create_sink_bin (guint index, gchar * uri)
{ 
  /* Sink bin: writes per-stream output images */

  GstElement *bin = NULL, *queue_sink = NULL, *nvvidconv_sink = NULL, 
             *filter_sink = NULL, *videoconvert = NULL, *encoder = NULL, *sink = NULL;
  GstCaps *caps_filter_sink = NULL;
  gchar bin_name[16] = { };
  gchar folder_path[50] = { };

  g_snprintf (bin_name, 15, "sink-bin-%02d", index);
  /* Create a sink GstBin to abstract this bin's content from the rest of the
   * pipeline */
  bin = gst_bin_new (bin_name);

  queue_sink = gst_element_factory_make("queue", "queue_sink");
  nvvidconv_sink = gst_element_factory_make("nvvideoconvert", "nvvidconv_sink");
  filter_sink = gst_element_factory_make("capsfilter", "filter_sink");
  /* Force system-memory RGBA out of nvvideoconvert so the downstream
   * videoconvert/jpegenc elements can map the buffer. */
  caps_filter_sink = gst_caps_from_string ("video/x-raw, format=RGBA");
  g_object_set(G_OBJECT(filter_sink), "caps", caps_filter_sink, NULL);
  gst_caps_unref(caps_filter_sink);
  videoconvert = gst_element_factory_make("videoconvert", "videoconverter");
  
  /* This encoder writes the per-stream sink images as JPEG files. */
  encoder = gst_element_factory_make("jpegenc", "jpeg-encoder");
  
  sink = gst_element_factory_make ("multifilesink", "multifiles-renderer");

  if (!bin || !queue_sink || !nvvidconv_sink || !filter_sink || !videoconvert || !encoder || !sink) {
    g_printerr ("One element in sink bin could not be created.\n");
    return NULL;
  }

  g_snprintf (folder_path, 50, "/workspace/iid-%d/", index);
  strcat (folder_path, "image_%d.jpg");
  g_print ("%s\n", folder_path);
  //g_object_set(G_OBJECT(sink), "location", folder_path, NULL);
  g_object_set (G_OBJECT (sink), "location", "/workspace/iid-0/image_%d.jpg", NULL);

  /* Connect to the "pad-added" signal of the decodebin which generates a
   * callback once a new pad for raw data has beed created by the decodebin */
  // g_signal_connect (G_OBJECT (uri_decode_bin), "pad-added",
  //     G_CALLBACK (cb_newpad), bin);
  // g_signal_connect (G_OBJECT (uri_decode_bin), "child-added",
  //     G_CALLBACK (decodebin_child_added), bin);

  gst_bin_add_many (GST_BIN (bin), queue_sink, nvvidconv_sink,
    filter_sink, videoconvert, encoder, sink, NULL);

  gst_element_link_many (queue_sink, nvvidconv_sink,
    filter_sink, videoconvert, encoder, sink, NULL);

  /* Add a ghost "sink" pad to the bin. Note that it is created without a
   * target here and nothing later retargets it. */
  if (!gst_element_add_pad (bin, gst_ghost_pad_new_no_target ("sink",
              GST_PAD_SINK))) {
    g_printerr ("Failed to add ghost pad in sink bin\n");
    return NULL;
  }

  return bin;
}

int
main (int argc, char *argv[])
{
  GMainLoop *loop = NULL;
  GstElement *pipeline = NULL, *source = NULL, *h264parser = NULL,
      *decoder = NULL, *streammux = NULL, *streamdemux = NULL, *primary_detector = NULL,  
      *secondary_detector = NULL, *nvvidconv = NULL, *nvosd = NULL;
  
  GstBus *bus = NULL;
  guint bus_watch_id;
  GstPad *osd_sink_pad = NULL;
  guint i, num_sources;
  guint pgie_batch_size, sgie_batch_size;

  /* Check input arguments */
  if (argc < 2) {
    g_printerr ("Usage: %s <uri1> [uri2] ... [uriN] \n", argv[0]);
    return -1;
  }
  num_sources = argc - 1;

  /* Standard GStreamer initialization */
  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new ("pipeline");
  
  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");

  if (!pipeline || !streammux) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }
  
  gst_bin_add (GST_BIN (pipeline), streammux);
  
  for (i = 0; i < num_sources; i++) {
    GstPad *sinkpad, *srcpad;
    gchar pad_name[16] = { };
    GstElement *source_bin = create_source_bin (i, argv[i + 1]);

    if (!source_bin) {
      g_printerr ("Failed to create source bin. Exiting.\n");
      return -1;
    }

    gst_bin_add (GST_BIN (pipeline), source_bin);

    g_snprintf (pad_name, 15, "sink_%u", i);
    sinkpad = gst_element_get_request_pad (streammux, pad_name);
    if (!sinkpad) {
      g_printerr ("Streammux request sink pad failed. Exiting.\n");
      return -1;
    }

    srcpad = gst_element_get_static_pad (source_bin, "src");
    if (!srcpad) {
      g_printerr ("Failed to get src pad of source bin. Exiting.\n");
      return -1;
    }

    if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link source bin to stream muxer. Exiting.\n");
      return -1;
    }

    gst_object_unref (srcpad);
    gst_object_unref (sinkpad);
  }

  /* Create two nvinfer instances for the two back-to-back detectors */
  primary_detector = gst_element_factory_make ("nvinfer", "primary-nvinference-engine1");

  secondary_detector = gst_element_factory_make ("nvinfer", "primary-nvinference-engine2");

  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");

  /* Create OSD to draw on the converted RGBA buffer */
  nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");

  /*nvstreamdemux*/
  streamdemux = gst_element_factory_make("nvstreamdemux", "stream-demuxer");

  if (!primary_detector || !secondary_detector || !nvvidconv || !nvosd || !streamdemux) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

  gst_bin_add (GST_BIN (pipeline), streamdemux);

  /* Create one sink bin per stream and link it to a demuxer src pad. */
  for (i = 0; i < num_sources; i++) {
    GstPad *sinkpad, *srcpad;
    gchar pad_name[16] = { };
    GstElement *sink_bin = create_sink_bin (i, argv[i + 1]);

    if (!sink_bin) {
      g_printerr ("Failed to create sink bin. Exiting.\n");
      return -1;
    }

    gst_bin_add (GST_BIN (pipeline), sink_bin);

    g_snprintf (pad_name, 15, "src_%u", i); //src_0, src_1, ..., src_n;

    srcpad = gst_element_get_request_pad (streamdemux, pad_name);
    if (!srcpad) {
      g_printerr ("Streamdemux request source pad failed. Exiting.\n");
      return -1;
    }

    sinkpad = gst_element_get_static_pad (sink_bin, "sink");
    if (!sinkpad) {
      g_printerr ("Failed to get sink pad of sink bin. Exiting.\n");
      return -1;
    }

    if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link source bin to stream demuxer. Exiting.\n");
      return -1;
    }

    gst_object_unref (srcpad);
    gst_object_unref (sinkpad);
  }

  /* Change the batch size from 1 to num_sources for multiple input streams. */
  g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
      MUXER_OUTPUT_HEIGHT, "batch-size", num_sources,
      "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);
  g_object_set (G_OBJECT (streammux), "nvbuf-memory-type", 3, NULL);

  /* Set the config files for the two detectors. We demonstrate this by using
   * the same detector model twice but making them act as vehicle-only and
   * person-only detectors by adjusting the bbox confidence thresholds in the
 * two separate config files. */
  g_object_set (G_OBJECT (primary_detector), "config-file-path", "primary_detector_config.txt",
          "unique-id", PRIMARY_DETECTOR_UID, NULL);

  /* Override the batch-size set in the primary detector config file with the number of sources. */
  g_object_get(G_OBJECT(primary_detector), "batch-size", &pgie_batch_size, NULL);
  if (pgie_batch_size != num_sources) {
    g_printerr("WARNING: Overriding primary detector infer-config batch-size (%d) with number of sources (%d)\n",
      pgie_batch_size, num_sources);
    g_object_set(G_OBJECT(primary_detector), "batch-size", num_sources, NULL);
  }

  g_object_set (G_OBJECT (secondary_detector), "config-file-path", "secondary_detector_config.txt",
          "unique-id", SECONDARY_DETECTOR_UID, "process-mode", SECOND_DETECTOR_IS_SECONDARY ? 2 : 1, NULL);

  /* Override the batch-size set in the secondary detector config file with the number of sources. */
  g_object_get(G_OBJECT(secondary_detector), "batch-size", &sgie_batch_size, NULL);
  if (sgie_batch_size != num_sources) {
    g_printerr("WARNING: Overriding secondary detector infer-config batch-size (%d) with number of sources (%d)\n",
      sgie_batch_size, num_sources);
    g_object_set(G_OBJECT(secondary_detector), "batch-size", num_sources, NULL);
  }
  
  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  /* Set up the pipeline */
  /* we add all elements into the pipeline */
  gst_bin_add_many (GST_BIN (pipeline), primary_detector, secondary_detector, nvvidconv, nvosd, NULL);

  /* We link the elements together:
   * source bins -> streammux -> primary detector -> secondary detector ->
   * nvvidconv -> nvosd -> streamdemux -> per-stream sink bins */
  if (!gst_element_link_many (streammux, primary_detector, secondary_detector,
      nvvidconv, nvosd, streamdemux, NULL)) {
    g_printerr ("Elements could not be linked: streammux->streamdemux. Exiting.\n");
    return -1;
  }

  /* Lets add probe to get informed of the meta data generated, we add probe to
   * the sink pad of the osd element, since by that time, the buffer would have
   * had got all the metadata. */

  osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
  if (!osd_sink_pad)
    g_print ("Unable to get sink pad\n");
  else
    gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
        osd_sink_pad_buffer_probe, NULL, NULL);

  /* Set the pipeline to "playing" state and start the clock used for the
   * FPS summary printed at EOS. */
  g_print ("Now playing: %s\n", argv[1]);
  t_start = clock ();
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait till pipeline encounters an error or EOS */
  g_print ("Running...\n");
  g_main_loop_run (loop);

  /* Out of the main loop, clean up nicely */
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
  return 0;
}
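
For reference, the app takes one or more stream URIs on the command line (matching the Usage message in main); a hypothetical invocation for two streams (binary name and paths are placeholders):

./back-to-back-demux-app file:///data/video_0.mp4 file:///data/video_1.mp4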

Thank you so much!

Hello!

I'm back! I managed to save output images for each stream by changing a few things in the create_sink_bin function, but I don't know why all the images are black and contain nothing, even though the inference output reports that frames are processed and objects are detected OK.

static GstElement *
create_sink_bin (guint index, gchar * uri)
{ 
  /* Sink bin: writes per-stream output images */

  GstElement *bin = NULL, *queue_sink = NULL, *nvvidconv_sink = NULL, 
             *filter_sink = NULL, *videoconvert = NULL, *encoder = NULL, *sink = NULL;
  GstCaps *caps_filter_sink = NULL;
  gchar bin_name[16] = { };
  gchar folder_path[50] = { };
  GstPad *pad, *ghost_pad;

  g_snprintf (bin_name, 15, "sink-bin-%02d", index);
  /* Create a sink GstBin to abstract this bin's content from the rest of the
   * pipeline */
  bin = gst_bin_new (bin_name);

  queue_sink = gst_element_factory_make("queue", "queue_sink");
  nvvidconv_sink = gst_element_factory_make("nvvideoconvert", "nvvidconv_sink");
  filter_sink = gst_element_factory_make("capsfilter", "filter_sink");
  /* Force system-memory RGBA out of nvvideoconvert so the downstream
   * videoconvert/jpegenc elements can map the buffer. */
  caps_filter_sink = gst_caps_from_string ("video/x-raw, format=RGBA");
  g_object_set(G_OBJECT(filter_sink), "caps", caps_filter_sink, NULL);
  gst_caps_unref(caps_filter_sink);
  videoconvert = gst_element_factory_make("videoconvert", "videoconverter");
  
  /* This encoder writes the per-stream sink images as JPEG files. */
  encoder = gst_element_factory_make("jpegenc", "jpeg-encoder");
  
  sink = gst_element_factory_make ("multifilesink", "multifiles-renderer");


  if (!bin || !queue_sink || !nvvidconv_sink || !filter_sink || !videoconvert || !encoder || !sink) {
    g_printerr ("One element in sink bin could not be created.\n");
    return NULL;
  }

  g_snprintf (folder_path, 50, "/workspace/iid-%d/", index);
  strcat(folder_path, "image_%d.jpg");
  g_object_set(G_OBJECT(sink), "location", folder_path, NULL);

  gst_bin_add_many (GST_BIN (bin), queue_sink, nvvidconv_sink,
    filter_sink, videoconvert, encoder, sink, NULL);

  gst_element_link_many (queue_sink, nvvidconv_sink,
    filter_sink, videoconvert, encoder, sink, NULL);

  /* Ghost the queue's sink pad so the demuxer's src pad can link into this bin. */
  pad = gst_element_get_static_pad (queue_sink, "sink");
  ghost_pad = gst_ghost_pad_new ("sink", pad);
  gst_pad_set_active (ghost_pad, TRUE);

  if (!gst_element_add_pad (bin, ghost_pad)) {
    g_printerr ("Failed to add ghost pad in sink bin\n");
    return NULL;
  }
  gst_object_unref (pad);

  return bin;
}

Here is the output with GST_DEBUG enabled:

0:00:09.610422026  5705 0x55820a06c0a0 ERROR   default video-frame.c:175:gst_video_frame_map_id: invalid buffer size 64 < 3686400
0:00:09.610450516  5705 0x55820a06c0a0 WARN    videofilter gstvideofilter.c:293:gst_video_filter_transform:<videoconverter> warning: invalid video buffer received
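
For what it's worth, 3686400 = 1280 × 720 × 4, i.e. exactly one RGBA frame at the muxer output resolution, which suggests videoconvert negotiated system-memory RGBA but received a 64-byte buffer (an NVMM handle rather than the mapped pixels).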

pipeline.pdf (42.4 KB)

Hi! Did you solve the problem? I had the same problem.

Yes, I did. Just follow @bcao's suggested pipeline; I used the demuxer and it worked.

I got the errors below when I use udpsink or filesink for multiple outputs. Have you ever run into this?

ERROR from sink_sub_bin_encoder2: Device '/dev/nvhost-msenc' failed during initialization
Debug info: gstv4l2object.c(4050): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline/GstBin:processing_bin_1/GstBin:sink_bin/GstBin:sink_sub_bin2/nvv4l2h264enc:sink_sub_bin_encoder2:
Call to S_FMT failed for YM12 @ 1920x1080: Unknown error -1

Thanks!

I have not hit that error, but based on your information, did you check this related post?

Hello @trild-vietnam,
I am also trying to add multiple filesinks to the pipeline. I saw the pipeline PDF you shared, and you are already using the demux plugin there, so what was the problem you were facing earlier, the one mentioned in your third comment in this post?
I just want to know what changes you made that got your pipeline working, since I can already see the demux plugin in your pipeline.

Thanks in advance

I moved the demux plugin to sit after the nvinfer plugin and before the nvvideoconvert plugin, because of the input data types the demuxer requires; see the sketch below.
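
In code terms, that means linking the demuxer immediately after the inference elements and doing conversion/OSD inside each demuxed branch. A minimal sketch using the variable names from the code above (the per-branch elements then live inside each sink bin):

  /* Demux straight after inference; each demuxed branch then runs its own
   * nvvideoconvert -> nvdsosd -> encoder -> sink. */
  if (!gst_element_link_many (streammux, primary_detector, secondary_detector,
          streamdemux, NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }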

Thank you @trild-vietnam for the reply. I am trying to save a video file, so I am changing your code accordingly. Below I am giving a rough view of my pipeline.

sink-bin:
queue → nvvidconv → cap-filter ("video/x-raw, format=I420") → encoder → codecparse (h264parse) → qtmux → sink (filesink)

main-pipeline:
streammux → pgie → tracker → nvdsanalytics → nvvidconv → nvosd → streamdemux
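
A minimal sketch of that sink bin in C, assuming DeepStream's nvv4l2h264enc encoder (the element names, the bin variable and the output path are placeholders):

  GstElement *queue   = gst_element_factory_make ("queue",          "sink-queue");
  GstElement *conv    = gst_element_factory_make ("nvvideoconvert", "sink-conv");
  GstElement *filter  = gst_element_factory_make ("capsfilter",     "sink-caps");
  GstElement *encoder = gst_element_factory_make ("nvv4l2h264enc",  "sink-enc");
  GstElement *parser  = gst_element_factory_make ("h264parse",      "sink-parse");
  GstElement *muxer   = gst_element_factory_make ("qtmux",          "sink-qtmux");
  GstElement *sink    = gst_element_factory_make ("filesink",       "sink-file");

  /* nvv4l2h264enc consumes NVMM I420/NV12, so keep the caps in NVMM memory. */
  GstCaps *caps = gst_caps_from_string ("video/x-raw(memory:NVMM), format=I420");
  g_object_set (G_OBJECT (filter), "caps", caps, NULL);
  gst_caps_unref (caps);
  g_object_set (G_OBJECT (sink), "location", "out_0.mp4", NULL);

  gst_bin_add_many (GST_BIN (bin), queue, conv, filter, encoder, parser,
      muxer, sink, NULL);
  gst_element_link_many (queue, conv, filter, encoder, parser, muxer,
      sink, NULL);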

Right now I haven't tried the code, but after running it I will post an update in case I face any problems. Also, can you tell me how to get that pipeline PDF you shared?

That is the debugging mode of GStreamer. You can look up how it works for more options and details.

Thank you. So running the pipeline in debugging mode can create that pipeline PDF, am I right?


Yes, but you need to add the export call in your pipeline code; see the sketch below.
[FYI] GStreamer export pipeline.
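
For reference, the export hook in GStreamer is the GST_DEBUG_BIN_TO_DOT_FILE macro, called once the pipeline is built (e.g. right after it reaches PLAYING), with GST_DEBUG_DUMP_DOT_DIR set in the environment; the generated .dot file is then converted with Graphviz:

  /* Writes $GST_DEBUG_DUMP_DOT_DIR/pipeline.dot describing the whole graph. */
  GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (pipeline), GST_DEBUG_GRAPH_SHOW_ALL,
      "pipeline");

and on the shell:

  GST_DEBUG_DUMP_DOT_DIR=/tmp ./your-app <uri1> <uri2>
  dot -Tpdf /tmp/pipeline.dot -o pipeline.pdf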

Hello, I'm trying multiple stream input and output too. Based on deepstream-test3, I added an nvstreamdemux and another sink after nvdsosd, like below:

uridecodebin1 \                                              / nvegltransform1 -> nveglglessink1
                nvstreammux -> nvinfer -> ... -> nvdsosd -> nvstreamdemux
uridecodebin2 /                                              \ nvegltransform2 -> nveglglessink2

and I got a segmentation fault:

Frame Number= 0 Number of Objects= 1 Vehicle_count= 0 Person_count= 1
Segmentation fault (core dumped)

Can you share your working pipeline.pdf? Thanks

@cy_workmail try moving nvstreamdemux to right after nvinfer.

Hello @trild-vietnam,
I am trying to save the output as video instead of image files. I tried modifying the sink bin code but got some errors. Can you please tell me what changes need to be made to save videos instead of images?
As far as I know, the sink-bin pipeline for saving video should look like:
queue -> nvvidconv -> cap-filter("video/x-raw(memory:NVMM), format=I420") -> h264encoder (nvv4l2h264enc) -> codecparse (h264parse) -> qtmux -> filesink

Please correct me if I am wrong; I took this as reference from "deepstream_sink_bin.c".
Thanks

Hi @deveshdashore, please create a new topic and give more details on the errors you got. Maybe a mod or I can take a look at it.

Thanks for the reply. I have already created a new topic, please have a look.

I haven't included the error yet, but I will add it right away.


Hi, I did that and the segmentation fault disappeared, but my FPS dropped to 0.2 and only one nveglglessink shows a picture; the other shows nothing. Do you have any advice?