Creating a custom plugin in DeepStream

Hello @CJR,
One more thing I want to add: when I ran with debug level 4, I found that the error is that the source pad of the filter is not able to link with the sink pad of the encoder. It showed that the caps are incompatible.
Will changing to videoconvert solve this problem? I will try and see if the error persists.

Hello @CJR,
I tried it and still got an error.

0:00:00.175920502   952 0x560bb402b400 INFO        GST_ELEMENT_PADS gstutils.c:1774:gst_element_link_pads_full: trying to link element video-converter1:(any) to element cap_filter:(any)
0:00:00.175930479   952 0x560bb402b400 INFO                GST_PADS gstutils.c:1035:gst_pad_check_link: trying to link video-converter1:src and cap_filter:sink
0:00:00.176020543   952 0x560bb402b400 INFO                GST_PADS gstpad.c:4232:gst_pad_peer_query:<cap_filter:src> pad has no peer
0:00:00.176117780   952 0x560bb402b400 INFO        GST_ELEMENT_PADS gstelement.c:920:gst_element_get_static_pad: found pad cap_filter:sink
0:00:00.176131378   952 0x560bb402b400 INFO                GST_PADS gstutils.c:1588:prepare_link_maybe_ghosting: video-converter1 and cap_filter in same bin, no need for ghost pads
0:00:00.176141603   952 0x560bb402b400 INFO                GST_PADS gstpad.c:2378:gst_pad_link_prepare: trying to link video-converter1:src and cap_filter:sink
0:00:00.176225993   952 0x560bb402b400 INFO                GST_PADS gstpad.c:4232:gst_pad_peer_query:<cap_filter:src> pad has no peer
0:00:00.176235446   952 0x560bb402b400 INFO                GST_PADS gstpad.c:2434:gst_pad_link_prepare: caps are incompatible
0:00:00.176244790   952 0x560bb402b400 INFO                GST_PADS gstpad.c:2529:gst_pad_link_full: link between video-converter1:src and cap_filter:sink failed: no common format
0:00:00.176256796   952 0x560bb402b400 INFO                GST_PADS gstutils.c:1035:gst_pad_check_link: trying to link video-converter1:src and cap_filter:sink
0:00:00.176265166   952 0x560bb402b400 INFO                GST_PADS gstpad.c:4232:gst_pad_peer_query:<cap_filter:src> pad has no peer
0:00:00.176349735   952 0x560bb402b400 INFO                GST_PADS gstpad.c:4232:gst_pad_peer_query:<cap_filter:src> pad has no peer
0:00:00.176366136   952 0x560bb402b400 INFO        GST_ELEMENT_PADS gstelement.c:920:gst_element_get_static_pad: found pad video-converter1:src
0:00:00.176375808   952 0x560bb402b400 INFO                GST_PADS gstutils.c:1588:prepare_link_maybe_ghosting: video-converter1 and cap_filter in same bin, no need for ghost pads
0:00:00.176385473   952 0x560bb402b400 INFO                GST_PADS gstpad.c:2378:gst_pad_link_prepare: trying to link video-converter1:src and cap_filter:sink
0:00:00.176468577   952 0x560bb402b400 INFO                GST_PADS gstpad.c:4232:gst_pad_peer_query:<cap_filter:src> pad has no peer
0:00:00.176477949   952 0x560bb402b400 INFO                GST_PADS gstpad.c:2434:gst_pad_link_prepare: caps are incompatible
0:00:00.176485958   952 0x560bb402b400 INFO                GST_PADS gstpad.c:2529:gst_pad_link_full: link between video-converter1:src and cap_filter:sink failed: no common format
Elements could not be linked. Exiting.

First the filter and the encoder were not linking, and now videoconvert and the filter are not linking. Am I missing something? I changed “nvvideoconvert” to “videoconvert” after the OSD.

What’s the type of encoder you are using? If it’s either nvv4l2h265enc or nvv4l2h264enc, then you will need to use nvvideoconvert, since these encoders work on NVMM buffers. If you are using a different encoder, then you will need to use the videoconvert plugin and set the cap_filter to match the required format type.

You can check the capabilities of each plugin by running gst-inspect-1.0 <plugin-name> and looking at the format types supported on its sink and src pads.
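
For illustration, here is a minimal sketch of the two converter/caps pairings (element names are placeholders matching this thread, not code from the posts above):

/* Hardware encoder path: nvv4l2h264enc / nvv4l2h265enc consume NVMM buffers,
 * so convert with nvvideoconvert and keep the NVMM caps feature. */
GstElement *conv = gst_element_factory_make ("nvvideoconvert", "video-converter1");
GstElement *cap_filter = gst_element_factory_make ("capsfilter", "cap_filter");
GstCaps *caps = gst_caps_from_string ("video/x-raw(memory:NVMM), format=I420");

/* Software encoder path (e.g. x264enc) works on system memory instead:
 *   conv = gst_element_factory_make ("videoconvert", "video-converter1");
 *   caps = gst_caps_from_string ("video/x-raw, format=I420");
 */
g_object_set (G_OBJECT (cap_filter), "caps", caps, NULL);
gst_caps_unref (caps);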

I think I am using the h264 encoder, which means: do I have to change “video/x-raw” to “video/x-raw(memory:NVMM)” in

caps = gst_caps_from_string ("video/x-raw, format=I420");

and videoconvert to nvvideoconvert?

Sorry for the inconvenience.

Hello @CJR,
Now I am able to save the video, and all the data can be seen in it.
Thank you very much for solving this problem of mine. Before posting this issue I did not have much idea about DeepStream, but now I have a good understanding.
And sorry for causing trouble.

Yes, that’s right. In the code snippet you have shared above, I don’t see NVDS_ELEM_ENC_H264_HW being defined anywhere. So if that is pointing to either nvv4l2h265enc or nvv4l2h264enc, then use "video/x-raw(memory:NVMM), format=I420" for your caps filter.

Basically, you should make sure that the capabilities of the src pad of the first plugin match the sink pad capabilities of the second plugin, where the data is flowing from the first plugin to the second one. You can read more about GStreamer plugin capabilities over here.
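
As a quick programmatic check (a sketch, not from the original posts; conv and cap_filter stand for the elements discussed above), you can query both pads and test whether their caps intersect before linking:

GstPad *srcpad = gst_element_get_static_pad (conv, "src");
GstPad *sinkpad = gst_element_get_static_pad (cap_filter, "sink");
GstCaps *src_caps = gst_pad_query_caps (srcpad, NULL);
GstCaps *sink_caps = gst_pad_query_caps (sinkpad, NULL);

/* If this check fails, linking will fail with "no common format",
 * exactly as in the debug log above. */
if (!gst_caps_can_intersect (src_caps, sink_caps))
  g_printerr ("Pads have no common format\n");

gst_caps_unref (src_caps);
gst_caps_unref (sink_caps);
gst_object_unref (srcpad);
gst_object_unref (sinkpad);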

Hello @CJR, I came across some questions -

  1. If I try to run this pipeline with multiple streams, which elements do I have to define per individual stream, and which can handle multiple streams? I think the sink and the encoding part have to be per stream, and the others can handle multiple streams.
    If I have to run multiple streams, can you please tell me which code I can refer to?

  2. As far as I know about the dsanalytics plugin, it can count +1 when someone crosses a line, but it can’t subtract when someone moves in the opposite direction. For this I thought of creating two lines with the same coordinates but opposite directions, and then creating a simple custom plugin which can parse the buffer from the dsanalytics source pad and do this subtraction. Can you please suggest an alternative solution, or something to help create such a plugin?

Thanks in advance.

Hi,

  1. Please refer to the deepstream-app common sources, where these features have been implemented. You can refer to deepstream_sink_bin.c and deepstream_source_bin.c specifically.

  2. Yes, you’re right. But you don’t need a plugin to aggregate the information from the analytics plugin. You can do that in a probe which is attached downstream of the analytics plugin; see the sketch below.
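
For instance, a minimal sketch of such a probe, assuming the config file defines two line-crossing entries named "Forward" and "Reverse" (hypothetical names) over the same coordinates in opposite directions, and that the file is compiled as C++ with gstnvdsmeta.h and nvds_analytics_meta.h included, like deepstream-nvdsanalytics-test:

static GstPadProbeReturn
net_count_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list;
         l_user != NULL; l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type != NVDS_USER_FRAME_META_NVDSANALYTICS)
        continue;
      NvDsAnalyticsFrameMeta *meta =
          (NvDsAnalyticsFrameMeta *) user_meta->user_meta_data;
      /* Two lines with the same coordinates but opposite directions:
       * subtracting one cumulative count from the other gives the net count. */
      gint64 net = (gint64) meta->objLCCumCnt["Forward"] -
          (gint64) meta->objLCCumCnt["Reverse"];
      g_print ("Stream %d net line crossings = %" G_GINT64_FORMAT "\n",
          frame_meta->pad_index, net);
    }
  }
  return GST_PAD_PROBE_OK;
}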

Hello @CJR,
I am thinking of adding the nvdsanalytics plugin in “deepstream-apps common”. I want to ask: can I add a probe for printing and changing the buffer data from nvdsanalytics, similar to what is implemented in “deepstream-nvdsanalytics-test”?

If yes, where can I add the code for attaching the probe?

Thank you in advance.

You can add it to the src pad of the analytics plugin so that the analytics metadata is available in the GstBuffer for you to aggregate it.
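
For example (a sketch; nvdsanalytics is the element variable and net_count_probe is the aggregation callback sketched earlier in this thread):

GstPad *src_pad = gst_element_get_static_pad (nvdsanalytics, "src");
gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_BUFFER,
    net_count_probe, NULL, NULL);
gst_object_unref (src_pad);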

Thank you @CJR, I think there is a bit of a misunderstanding. I want to know in which file I have to add the probe, i.e. in “deepstream_dsanalytics.c” (this file I will add by referencing dsexample in /apps-common/src) or in “deepstream_osd_bin.c”.

I ask because I didn’t see any probe attached in the source files in “/apps-common/src”.

The files in /apps-common/src are the shared code between the multiple apps which use the same components for building the pipeline. You can take a look at the deepstream_app.c file, which already has a few implementations of probes.

Thank you @CJR, one more thing I want to ask: this file “deepstream_app.c” is present in apps/sample-apps/deepstream-app, am I right?

Hello @CJR,
I am trying to edit and create source files to incorporate the “nvdsanalytics plugin” in the sample application. I will list the names of the files which have been edited, and their paths, below so that you can point out if any file is left which needs to be updated.

1. deepstream_config_file_parser.h  -  Modified  -  "/apps-common/includes"
2. deepstream_config.h              -  Modified  -  "/apps-common/includes"
3. deepstream_dsanalytics.h         -  Created   -  "/apps-common/includes"
4. deepstream_config_file_parser.c  -  Modified  -  "/apps-common/src"
5. deepstream_dsanalytics.c         -  Created   -  "/apps-common/src"
6. deepstream_app_config_parser.c   -  Modified  -  "/sample-apps/deepstream-app"
7. deepstream_app.h                 -  Modified  -  "/sample-apps/deepstream-app"

Right now I am facing a problem while editing “deepstream_app.c”, which is in “sample-apps/deepstream-app/”.
I mostly took reference from the tracker plugin and the dsexample plugin to edit the code in the above-mentioned files, but in “deepstream_app.c” they are written very differently, and now I am confused about how I should add the nvdsanalytics plugin in "deepstream_app.c".

Please help

Hello @CJR,
Sorry for the rapid questioning. I tried adding a sink bin in nvdsanalytics-test for creating multiple file sinks, but faced some errors related to v4l2. I am showing my pipeline below, please have a look.

sink-bin:
queue → nvvidconv → cap-filter (“video/x-raw, format=I420”) → encoder (nvv4l2h264enc) → codecparse (h264parse) → qtmux → sink (filesink)

main-pipeline:
streammux → pgie → tracker → nvdsanalytics → nvvidconv → nvosd → streamdemux

Error:
0:00:03.155740905  7367 0x7f3adc07a370 WARN            v4l2videodec gstv4l2videodec.c:1609:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder0> Duration invalid, not setting latency
0:00:03.155782188  7367 0x7f3adc07a370 WARN          v4l2bufferpool gstv4l2bufferpool.c:1057:gst_v4l2_buffer_pool_start:<nvv4l2decoder0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:03.158427052  7367 0x7f3ad000c0f0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1535:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder0:pool:src> Driver should never set v4l2_buffer.field to ANY

Thanks in advance

Hello @CJR,
I want to give an update on the error which I am facing, and I am also attaching the code I am running. Please have a look.

Updated error:
0:00:07.853592760   345 0x7f4610011cf0 ERROR                   v4l2 gstv4l2object.c:2074:gst_v4l2_object_get_interlace_mode: Driver bug detected - check driver with v4l2-compliance from http://git.linuxtv.org/v4l-utils.git


Code for multiple file sinks:
#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <sys/time.h>
#include <iostream>
#include <vector>
#include <unordered_map>
#include "gstnvdsmeta.h"
#include "nvds_analytics_meta.h"
#include "deepstream_config.h"
#ifndef PLATFORM_TEGRA
#include "gst-nvmessage.h"
#endif

#define MAX_DISPLAY_LEN 64

#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2

/* The muxer output resolution must be set if the input streams will be of
 * different resolution. The muxer will scale all the input frames to this
 * resolution. */
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080

/* Muxer batch formation timeout, for e.g. 40 millisec. Should ideally be set
 * based on the fastest source's framerate. */
#define MUXER_BATCH_TIMEOUT_USEC 40000

#define TILED_OUTPUT_WIDTH 1920
#define TILED_OUTPUT_HEIGHT 1080

/* NVIDIA Decoder source pad memory feature. This feature signifies that source
 * pads having this capability will push GstBuffers containing cuda buffers. */
#define GST_CAPS_FEATURES_NVMM "memory:NVMM"

gchar pgie_classes_str[4][32] = { "Vehicle", "TwoWheeler", "Person",
  "RoadSign"
};


/* nvdsanalytics_src_pad_buffer_probe will extract the metadata received on the
 * nvdsanalytics src pad, including the nvdsanalytics metadata itself. */
static GstPadProbeReturn
nvdsanalytics_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
    GstBuffer *buf = (GstBuffer *) info->data;
    guint num_rects = 0;
    NvDsObjectMeta *obj_meta = NULL;
    guint vehicle_count = 0;
    guint person_count = 0;
    NvDsMetaList * l_frame = NULL;
    NvDsMetaList * l_obj = NULL;
    guint lc_count = 0;
    guint roi_count = 0;
    bool overcrowding = false;

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
        for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
                l_obj = l_obj->next) {
            obj_meta = (NvDsObjectMeta *) (l_obj->data);
            if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
                vehicle_count++;
                num_rects++;
            }
            if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
                person_count++;
                num_rects++;
            }

            // Access attached user meta for each object
            for (NvDsMetaList *l_user_meta = obj_meta->obj_user_meta_list; l_user_meta != NULL;
                    l_user_meta = l_user_meta->next) {
                NvDsUserMeta *user_meta = (NvDsUserMeta *) (l_user_meta->data);
                if(user_meta->base_meta.meta_type == NVDS_USER_OBJ_META_NVDSANALYTICS)
                {
                    NvDsAnalyticsObjInfo * user_meta_data = (NvDsAnalyticsObjInfo *)user_meta->user_meta_data;
                    if (user_meta_data->dirStatus.length()){
                        g_print ("object %lu moving in %s\n", obj_meta->object_id, user_meta_data->dirStatus.c_str());
                    }
                }
            }
        }
        roi_count = 0;
        lc_count = 0;
        overcrowding = false;

        /* Iterate user metadata in frames to search analytics metadata */
        for (NvDsMetaList * l_user = frame_meta->frame_user_meta_list;
                l_user != NULL; l_user = l_user->next) {
            NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
            if (user_meta->base_meta.meta_type != NVDS_USER_FRAME_META_NVDSANALYTICS)
                continue;

            /* convert to  metadata */
            NvDsAnalyticsFrameMeta *meta =
                (NvDsAnalyticsFrameMeta *) user_meta->user_meta_data;
            /* Get the labels from nvdsanalytics config file */
            roi_count = meta->objInROIcnt["RF"];
            lc_count = meta->objLCCumCnt["Exit"];
            overcrowding = meta->ocStatus["OC"];
        }
        g_print ("Frame Number = %d of Stream = %d, Number of objects = %d "
                "Vehicle Count = %d Person Count = %d Objs in ROI = %d LC count = %d Overcrowding = %d\n",
            frame_meta->frame_num, frame_meta->pad_index,
            num_rects, vehicle_count, person_count, roi_count, lc_count,overcrowding);
    }
    return GST_PAD_PROBE_OK;
}


static gboolean
bus_call (GstBus * bus, GstMessage * msg, gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      g_main_loop_quit (loop);
      break;
    case GST_MESSAGE_WARNING:
    {
      gchar *debug;
      GError *error;
      gst_message_parse_warning (msg, &error, &debug);
      g_printerr ("WARNING from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      g_free (debug);
      g_printerr ("Warning: %s\n", error->message);
      g_error_free (error);
      break;
    }
    case GST_MESSAGE_ERROR:
    {
      gchar *debug;
      GError *error;
      gst_message_parse_error (msg, &error, &debug);
      g_printerr ("ERROR from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      if (debug)
        g_printerr ("Error details: %s\n", debug);
      g_free (debug);
      g_error_free (error);
      g_main_loop_quit (loop);
      break;
    }
#ifndef PLATFORM_TEGRA
    case GST_MESSAGE_ELEMENT:
    {
      if (gst_nvmessage_is_stream_eos (msg)) {
        guint stream_id;
        if (gst_nvmessage_parse_stream_eos (msg, &stream_id)) {
          g_print ("Got EOS from stream %d\n", stream_id);
        }
      }
      break;
    }
#endif
    default:
      break;
  }
  return TRUE;
}

static void
cb_newpad (GstElement * decodebin, GstPad * decoder_src_pad, gpointer data)
{
  g_print ("In cb_newpad\n");
  GstCaps *caps = gst_pad_get_current_caps (decoder_src_pad);
  const GstStructure *str = gst_caps_get_structure (caps, 0);
  const gchar *name = gst_structure_get_name (str);
  GstElement *source_bin = (GstElement *) data;
  GstCapsFeatures *features = gst_caps_get_features (caps, 0);

  /* Need to check if the pad created by the decodebin is for video and not
   * audio. */
  if (!strncmp (name, "video", 5)) {
    /* Link the decodebin pad only if decodebin has picked nvidia
     * decoder plugin nvdec_*. We do this by checking if the pad caps contain
     * NVMM memory features. */
    if (gst_caps_features_contains (features, GST_CAPS_FEATURES_NVMM)) {
      /* Get the source bin ghost pad */
      GstPad *bin_ghost_pad = gst_element_get_static_pad (source_bin, "src");
      if (!gst_ghost_pad_set_target (GST_GHOST_PAD (bin_ghost_pad),
              decoder_src_pad)) {
        g_printerr ("Failed to link decoder src pad to source bin ghost pad\n");
      }
      gst_object_unref (bin_ghost_pad);
    } else {
      g_printerr ("Error: Decodebin did not pick nvidia decoder plugin.\n");
    }
  }
}

static void
decodebin_child_added (GstChildProxy * child_proxy, GObject * object,
    gchar * name, gpointer user_data)
{
  g_print ("Decodebin child added: %s\n", name);
  if (g_strrstr (name, "decodebin") == name) {
    g_signal_connect (G_OBJECT (object), "child-added",
        G_CALLBACK (decodebin_child_added), user_data);
  }
}

static GstElement *
create_source_bin (guint index, gchar * uri)
{
  GstElement *bin = NULL, *uri_decode_bin = NULL;
  gchar bin_name[16] = { };

  g_snprintf (bin_name, 15, "source-bin-%02d", index);
  /* Create a source GstBin to abstract this bin's content from the rest of the
   * pipeline */
  bin = gst_bin_new (bin_name);

  /* Source element for reading from the uri.
   * We will use decodebin and let it figure out the container format of the
   * stream and the codec and plug the appropriate demux and decode plugins. */
  uri_decode_bin = gst_element_factory_make ("uridecodebin", "uri-decode-bin");

  if (!bin || !uri_decode_bin) {
    g_printerr ("One element in source bin could not be created.\n");
    return NULL;
  }

  /* We set the input uri to the source element */
  g_object_set (G_OBJECT (uri_decode_bin), "uri", uri, NULL);

  /* Connect to the "pad-added" signal of the decodebin which generates a
   * callback once a new pad for raw data has been created by the decodebin */
  g_signal_connect (G_OBJECT (uri_decode_bin), "pad-added",
      G_CALLBACK (cb_newpad), bin);
  g_signal_connect (G_OBJECT (uri_decode_bin), "child-added",
      G_CALLBACK (decodebin_child_added), bin);

  gst_bin_add (GST_BIN (bin), uri_decode_bin);

  /* We need to create a ghost pad for the source bin which will act as a proxy
   * for the video decoder src pad. The ghost pad will not have a target right
   * now. Once the decode bin creates the video decoder and generates the
   * cb_newpad callback, we will set the ghost pad target to the video decoder
   * src pad. */
  if (!gst_element_add_pad (bin, gst_ghost_pad_new_no_target ("src",
              GST_PAD_SRC))) {
    g_printerr ("Failed to add ghost pad in source bin\n");
    return NULL;
  }

  return bin;
}

/* This code is added for incorporating multiple file-sink in this pipeline */
static GstElement *
create_sink_bin (guint index, gchar * uri)
{

  GstElement *bin = NULL, *queue_sink = NULL, *nvvidconv_sink = NULL,
             *filter_sink = NULL, *codecparse = NULL, *mux = NULL,
             *encoder = NULL, *sink = NULL;
  GstCaps *caps_filter_sink = NULL;
  gchar bin_name[16] = { };
  gchar folder_path[50] = { };
  GstPad *pad, *ghost_pad;
  guint bitrate = 2000000;
  guint profile = 0;

  g_snprintf (bin_name, 15, "sink-bin-%02d", index);
  /* Create a source GstBin to abstract this bin's content from the rest of the
   * pipeline */
  bin = gst_bin_new (bin_name);

  queue_sink = gst_element_factory_make("queue", "queue_sink");

  nvvidconv_sink = gst_element_factory_make("nvvideoconvert", "nvvidconv_sink");

  filter_sink = gst_element_factory_make("capsfilter", "filter_sink");

  caps_filter_sink = gst_caps_from_string ("video/x-raw(memory:NVMM), format=I420"); //filter
  g_object_set(G_OBJECT(filter_sink), "caps", caps_filter_sink, NULL);

  gst_caps_unref(caps_filter_sink);

  /* This encoder encodes the converted buffers for the file sink */
  encoder = gst_element_factory_make("nvv4l2h264enc", "h264-encoder");
  g_object_set (G_OBJECT (encoder), "profile", profile, NULL);
  g_object_set (G_OBJECT (encoder), "bitrate", bitrate, NULL);

  codecparse = gst_element_factory_make ("h264parse", "h264-parser");

  mux = gst_element_factory_make ("qtmux", "mux");

  sink = gst_element_factory_make ("filesink", "filesink");

  if (!bin || !queue_sink || !nvvidconv_sink || !filter_sink || !encoder || !codecparse || !mux || !sink) {
    g_printerr ("One element in sink bin could not be created.\n");
    return NULL;
  }

  g_snprintf (folder_path, sizeof (folder_path), "iid-%u-video.mp4", index);
  g_object_set(G_OBJECT(sink), "location", folder_path, NULL);

  gst_bin_add_many (GST_BIN (bin), queue_sink, nvvidconv_sink,
    filter_sink, encoder, codecparse, mux, sink, NULL);

  if (!gst_element_link_many (queue_sink, nvvidconv_sink,
      filter_sink, encoder, codecparse, mux, sink, NULL)) {
    g_printerr ("Elements in sink bin could not be linked. Exiting.\n");
    return NULL;
  }

  pad = gst_element_get_static_pad (queue_sink, "sink");
  ghost_pad = gst_ghost_pad_new ("sink", pad);
  gst_pad_set_active (ghost_pad, TRUE);

  if (!gst_element_add_pad (bin, ghost_pad)) {
    g_printerr ("Failed to add ghost pad in sink bin\n");
    return NULL;
  }
  gst_object_unref (pad);

  return bin;
}

int
main (int argc, char *argv[])
{
  GMainLoop *loop = NULL;
  GstElement *pipeline = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL,
             *nvtracker = NULL, *nvdsanalytics = NULL, *streamdemux = NULL,
      *nvvidconv = NULL, *nvosd = NULL, *tiler = NULL;
#ifdef PLATFORM_TEGRA
  GstElement *transform = NULL;
#endif
  GstBus *bus = NULL;
  guint bus_watch_id;
  GstPad *nvdsanalytics_src_pad = NULL;
  guint i, num_sources;
  guint tiler_rows, tiler_columns;
  guint pgie_batch_size;

  /* Check input arguments */
  if (argc < 2) {
    g_printerr ("Usage: %s <uri1> [uri2] ... [uriN] \n", argv[0]);
    return -1;
  }
  num_sources = argc - 1;

  /* Standard GStreamer initialization */
  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new ("pipeline");

  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");

  if (!pipeline || !streammux) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }
  gst_bin_add (GST_BIN (pipeline), streammux);

  for (i = 0; i < num_sources; i++) {
    GstPad *sinkpad, *srcpad;
    gchar pad_name[16] = { };
    GstElement *source_bin = create_source_bin (i, argv[i + 1]);

    if (!source_bin) {
      g_printerr ("Failed to create source bin. Exiting.\n");
      return -1;
    }

    gst_bin_add (GST_BIN (pipeline), source_bin);

    g_snprintf (pad_name, 15, "sink_%u", i);
    sinkpad = gst_element_get_request_pad (streammux, pad_name);
    if (!sinkpad) {
      g_printerr ("Streammux request sink pad failed. Exiting.\n");
      return -1;
    }

    srcpad = gst_element_get_static_pad (source_bin, "src");
    if (!srcpad) {
      g_printerr ("Failed to get src pad of source bin. Exiting.\n");
      return -1;
    }

    if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link source bin to stream muxer. Exiting.\n");
      return -1;
    }

    gst_object_unref (srcpad);
    gst_object_unref (sinkpad);
  }

  /* Use nvinfer to infer on batched frame. */
  pgie = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");

  /* Use nvtracker to track detections on batched frame. */
  nvtracker = gst_element_factory_make ("nvtracker", "nvtracker");

  /* Use nvdsanalytics to perform analytics on object */
  nvdsanalytics = gst_element_factory_make ("nvdsanalytics", "nvdsanalytics");

  /* Use nvtiler to composite the batched frames into a 2D tiled array based
   * on the source of the frames. */
  tiler = gst_element_factory_make ("nvmultistreamtiler", "nvtiler");
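  /* NOTE: the tiler is created and configured below but is never added to the
   * pipeline or linked; the nvstreamdemux path replaces the tiled display here. */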

  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");

  /* Create OSD to draw on the converted RGBA buffer */
  nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");

  /* Add a demuxer to split the batched buffer into per-stream buffers for multiple sinks */
  streamdemux = gst_element_factory_make("nvstreamdemux", "stream-demuxer");

  /* Finally render the osd output */
#ifdef PLATFORM_TEGRA
  transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
#endif
  //sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");

  if (!pgie || !nvtracker || !nvdsanalytics || !tiler || !nvvidconv ||
      !nvosd || !streamdemux) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

  gst_bin_add (GST_BIN (pipeline), streamdemux);

  /*Sink multifiles video for multistream*/
  for (i = 0; i < num_sources; i++) {
    GstPad *sinkpad, *srcpad;
    gchar pad_name[16] = { };
    GstElement *sink_bin = create_sink_bin (i, argv[i + 1]);

    if (!sink_bin) {
      g_printerr ("Failed to create sink bin. Exiting.\n");
      return -1;
    }

    gst_bin_add (GST_BIN (pipeline), sink_bin);

    g_snprintf (pad_name, 15, "src_%u", i); //src_0, src_1, ..., src_n;

    srcpad = gst_element_get_request_pad (streamdemux, pad_name);
    if (!srcpad) {
      g_printerr ("Streamdemux request source pad failed. Exiting.\n");
      return -1;
    }

    sinkpad = gst_element_get_static_pad (sink_bin, "sink");
    if (!sinkpad) {
      g_printerr ("Failed to get sink pad of sink bin. Exiting.\n");
      return -1;
    }

    if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link source bin to stream demuxer. Exiting.\n");
      return -1;
    }

    gst_object_unref (srcpad);
    gst_object_unref (sinkpad);
  }

#ifdef PLATFORM_TEGRA
  if(!transform) {
    g_printerr ("One tegra element could not be created. Exiting.\n");
    return -1;
  }
#endif

  g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
      MUXER_OUTPUT_HEIGHT, "batch-size", num_sources,
      "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);

  /* Configure the nvinfer element using the nvinfer config file. */
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "nvdsanalytics_pgie_config.txt", NULL);

  /* Configure the nvtracker element for using the particular tracker algorithm. */
  g_object_set (G_OBJECT (nvtracker),
      "ll-lib-file", "/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so",
      "ll-config-file", "tracker_config.yml", "tracker-width", 640, "tracker-height", 480,
       NULL);

  /* Configure the nvdsanalytics element for using the particular analytics config file*/
  g_object_set (G_OBJECT (nvdsanalytics),
      "config-file", "config_nvdsanalytics.txt",
       NULL);

  /* Override the batch-size set in the config file with the number of sources. */
  g_object_get (G_OBJECT (pgie), "batch-size", &pgie_batch_size, NULL);
  if (pgie_batch_size != num_sources) {
    g_printerr
        ("WARNING: Overriding infer-config batch-size (%d) with number of sources (%d)\n",
        pgie_batch_size, num_sources);
    g_object_set (G_OBJECT (pgie), "batch-size", num_sources, NULL);
  }

  tiler_rows = (guint) sqrt (num_sources);
  tiler_columns = (guint) ceil (1.0 * num_sources / tiler_rows);
  /* we set the tiler properties here */
  g_object_set (G_OBJECT (tiler), "rows", tiler_rows, "columns", tiler_columns,
      "width", TILED_OUTPUT_WIDTH, "height", TILED_OUTPUT_HEIGHT, NULL);

  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  /* Set up the pipeline */
  /* we add all elements into the pipeline */
#ifdef PLATFORM_TEGRA
  gst_bin_add_many (GST_BIN (pipeline), pgie, nvtracker, nvdsanalytics,
          nvvidconv, nvosd, NULL);

  /* we link the elements together
   * nvstreammux -> nvinfer -> nvtracker -> nvdsanalytics ->
   * nvvideoconvert -> nvosd -> nvstreamdemux -> per-stream sink bins
   */
  if (!gst_element_link_many (streammux, pgie, nvtracker, nvdsanalytics,
                              nvvidconv, nvosd, streamdemux, NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }
#else
  gst_bin_add_many (GST_BIN (pipeline), pgie, nvtracker, nvdsanalytics,
                    nvvidconv, nvosd, NULL);
  /* we link the elements together
   * nvstreammux -> nvinfer -> nvtracker -> nvdsanalytics ->
   * nvvideoconvert -> nvosd -> nvstreamdemux -> per-stream sink bins
   */
  if (!gst_element_link_many (streammux, pgie, nvtracker, nvdsanalytics,
       nvvidconv, nvosd, streamdemux, NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }
#endif

  /* Let's add a probe to get informed of the metadata generated. We add the
   * probe to the src pad of the nvdsanalytics element, since by that time
   * the buffer will have all the analytics metadata attached.
   */
  nvdsanalytics_src_pad = gst_element_get_static_pad (nvdsanalytics, "src");
  if (!nvdsanalytics_src_pad)
    g_print ("Unable to get src pad\n");
  else
    gst_pad_add_probe (nvdsanalytics_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
        nvdsanalytics_src_pad_buffer_probe, NULL, NULL);

  /* Set the pipeline to "playing" state */
  g_print ("Now playing:");
  for (i = 0; i < num_sources; i++) {
    g_print (" %s,", argv[i + 1]);
  }
  g_print ("\n");
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait till pipeline encounters an error or EOS */
  g_print ("Running...\n");
  g_main_loop_run (loop);

  /* Out of the main loop, clean up nicely */
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
  return 0;
}

Please have a look.