Creating a custom plugin in DeepStream

Hello everyone,
I am trying to build a custom plugin for an “object counter”: if an object crosses a line with given coordinates, it should be counted. I am using the YOLOv3-based inference provided with the DeepStream SDK docker container in the “sources” directory, and the KLT tracker for tracking objects.
I am following this post - [https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide/deepstream_custom_plugin.html#]. It provides basic information about the gst-dsexample plugin.

I am fairly new to DeepStream and video analytics in general. I did not get much out of the page I shared above, and I have not been able to find a good procedure to follow for writing my custom plugin, or any other method to accomplish my project.

In the end, I want my custom plugin (or any other method) to perform the following tasks:

  1. To be able to extract the required information from the buffer, such as bounding
    box coordinates and unique tracking IDs (see the rough sketch after this list).
  2. To be able to count objects crossing the line and add the count and the line
    coordinates to the buffer so they can be shown in the output stream.

I already have Python code for the line-counter algorithm, but I don’t know how to implement it in DeepStream.
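For context, my understanding is that the extraction part would look roughly like the sketch below. This is just my rough reading of the metadata API; counter_probe is a placeholder name, not something taken from the SDK samples.

    /* Rough sketch of a pad probe that reads bounding boxes and tracker IDs
     * from the batch metadata attached by nvinfer/nvtracker. */
    #include <gst/gst.h>
    #include "gstnvdsmeta.h"

    static GstPadProbeReturn
    counter_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      GstBuffer *buf = (GstBuffer *) info->data;
      NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
      if (!batch_meta)
        return GST_PAD_PROBE_OK;

      for (NvDsMetaList *lf = batch_meta->frame_meta_list; lf; lf = lf->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) lf->data;
        for (NvDsMetaList *lo = frame_meta->obj_meta_list; lo; lo = lo->next) {
          NvDsObjectMeta *obj = (NvDsObjectMeta *) lo->data;
          /* Bounding box and the tracker-assigned unique ID. */
          g_print ("id=%" G_GUINT64_FORMAT " bbox=(%.0f, %.0f, %.0f, %.0f)\n",
              obj->object_id, obj->rect_params.left, obj->rect_params.top,
              obj->rect_params.width, obj->rect_params.height);
          /* A line-crossing check against stored per-track history would go here. */
        }
      }
      return GST_PAD_PROBE_OK;
    }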

Please help me, and let me know if any other information is required from my side to solve the issue.
• Hardware Platform: GPU (Tesla K20Xm)
• DeepStream Version: 4.0.2-19.12
• TensorRT Version: 6.0.1
• NVIDIA GPU Driver Version: 440.33.01
Thanks in advance.

Hi,

Line crossing has already been implemented in the nvdsanalytics plugin; could you please take a look at that?

Here’s the documentation link.
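For example, the line-crossing section of the nvdsanalytics config file looks like the snippet below; the coordinates and values here are purely illustrative.

    [property]
    enable=1
    config-width=1920
    config-height=1080
    osd-mode=2
    display-font-size=12

    [line-crossing-stream-0]
    enable=1
    # Label=direction point pair, followed by the crossing-line point pair
    line-crossing-Entry=789;672;1084;900;851;773;1203;732
    extended=0
    mode=loose
    class-id=-1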

Thank you @CJR for the help. I will definitely look into it and come back to you.

Hello @CJR,
I have gone through the documentation you shared, and this is exactly what I was looking for. However, I am not able to use the plugin. I tried two methods of using the nvdsanalytics plugin for my use case:

  1. I tried running deepstream-nvdsanalytics-test in “/sources/apps/sample-apps” by following the README. But since I am running on a server inside the DeepStream docker container, I was not able to run the test; I believe it uses an EGL sink and a tiled display, and I don’t know how to change them.

  2. I tried copying the contents of config_nvdsanalytics.txt from “/sources/apps/sample-apps/deepstream-nvdsanalytics-test” into one of the config files in “/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt”. I disabled the tiled display and changed the sink type to File in the config file so that I could save the video, but I did not get any nvdsanalytics output in the video. Previously I tried the same method with the dsexample plugin and that worked fine.

I want to integrate the tracker and the nvdsanalytics plugin with the YOLOv3 config file given in “/sources/objectDetector_Yolo”. I was able to integrate the tracker by adding its details to the YOLOv3 config file, but I don’t know how to integrate nvdsanalytics the same way; I tried that in method two above.

Please help me understand the correct way to use the nvdsanalytics plugin.

Hello @CJR,
I have tried running deepstream-nvdsanalytics-test using the gst-launch command mentioned in the documentation. I changed the config file paths as required and replaced the EGL sink with a file sink (filesink location=capture.mp4). I got the capture.mp4 file, but it was empty; nothing was written to it.
The gst-launch command I used is:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 live-source=0 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so ll-config-file=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/tracker_config.yml tracker-width=640 tracker-height=384 ! nvdsanalytics config-file=config_nvdsanalytics.txt ! nvmultistreamtiler ! nvvideoconvert ! nvdsosd ! filesink location=capture.mp4

Please help me to solve this issue.

Hello everyone,
Can anyone please help me with my problem? It has been a long time and I am still not able to solve it.
Please help.

Hi,

Apologies for the delay in response. You can change the type of sink in deepstream-nvdsanalytics-test by changing
sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
TO
sink = gst_element_factory_make ("filesink", "filesink");
g_object_set (G_OBJECT (sink), "location", "capture.mp4", NULL);

Hello @CJR,
Thank you very much. Now I am getting details like direction and line crossings printed in the terminal, but the video I am saving is still empty; nothing gets written to it.
One more thing I want to ask: how can I show details like lines, counts, regions, etc. directly on the frame, the way bounding boxes and tracker IDs are shown? Do I have to change the code in the OSD sink, or is there an easier way to do it?
Again thank you for helping me.

Sorry, you’ll need a few more changes than just switching to a filesink.

You will need to connect osd->nvvideoconvert->capsfilter(video/x-raw)->encoder->codecparse->mux->filesink.

You can refer to the create_encode_file_bin function in deepstream_sink_bin.c, where a similar chain has been implemented.
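Stripped of the config-driven parts, that chain boils down to roughly the sketch below, assuming pipeline and osd already exist; the element names and hard-coded values here are illustrative.

    /* Sketch of the encode-to-file chain, simplified from the config-driven code. */
    GstElement *conv = gst_element_factory_make ("nvvideoconvert", "encode-conv");
    GstElement *capsfilter = gst_element_factory_make ("capsfilter", "encode-caps");
    GstElement *encoder = gst_element_factory_make ("nvv4l2h264enc", "encoder");
    GstElement *parser = gst_element_factory_make ("h264parse", "parser");
    GstElement *muxer = gst_element_factory_make ("qtmux", "muxer");
    GstElement *filesink = gst_element_factory_make ("filesink", "filesink");

    GstCaps *caps = gst_caps_from_string ("video/x-raw(memory:NVMM), format=I420");
    g_object_set (capsfilter, "caps", caps, NULL);
    gst_caps_unref (caps);
    g_object_set (filesink, "location", "out.mp4", "sync", FALSE, NULL);

    gst_bin_add_many (GST_BIN (pipeline), conv, capsfilter, encoder, parser,
        muxer, filesink, NULL);
    /* osd must already be in the pipeline; hang the chain off its src pad. */
    gst_element_link_many (osd, conv, capsfilter, encoder, parser, muxer,
        filesink, NULL);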

Any info you want displayed can be added to the OSD metadata before the buffer enters the OSD’s sink pad; typically this is done in a probe. In the nvdsanalytics-test sample, you can refer to the nvdsanalytics_src_pad_buffer_probe function.
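As an illustration (not the exact sample code), attaching a text label with a running count before the OSD could look like the sketch below; g_crossing_count is just a placeholder for wherever you keep your count.

    /* Sketch: draw a running crossing count on each frame by attaching
     * display metadata before the buffer reaches nvdsosd. */
    static guint64 g_crossing_count; /* placeholder for your own counter */

    static GstPadProbeReturn
    osd_display_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      GstBuffer *buf = (GstBuffer *) info->data;
      NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
      if (!batch_meta)
        return GST_PAD_PROBE_OK;

      for (NvDsMetaList *l = batch_meta->frame_meta_list; l; l = l->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
        NvDsDisplayMeta *display_meta =
            nvds_acquire_display_meta_from_pool (batch_meta);
        NvOSD_TextParams *txt = &display_meta->text_params[0];

        display_meta->num_labels = 1;
        txt->display_text =
            g_strdup_printf ("Crossings: %" G_GUINT64_FORMAT, g_crossing_count);
        txt->x_offset = 20;
        txt->y_offset = 40;
        txt->font_params.font_name = (gchar *) "Serif";
        txt->font_params.font_size = 12;
        txt->font_params.font_color = (NvOSD_ColorParams) { 1.0, 1.0, 1.0, 1.0 };
        txt->set_bg_clr = 1;
        txt->text_bg_clr = (NvOSD_ColorParams) { 0.0, 0.0, 0.0, 1.0 };

        nvds_add_display_meta_to_frame (frame_meta, display_meta);
      }
      return GST_PAD_PROBE_OK;
    }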

Hello @CJR,
Thank you for the response. I went through the create_encode_file_bin function, but I find it difficult to understand, since after creating each element the function reads details from the config to set the elements up.

Is there an easier way to do it, like attaching the nvdsanalytics plugin to the sample apps in samples/configs/deepstream-app? There it is easy to modify parameters using the config file.

We will be adding nvdsanalytics plugin support to deepstream-app in the next release. Since the sources are already available to you, you can do it yourself as well. If you check how the dsexample plugin is added in the deepstream-app sources, there are comments in the code explaining what needs to be done. You can start by looking at the create_dsexample_bin function in the deepstream-app sources and make similar changes for the nvdsanalytics plugin; a rough sketch of such a bin follows. The approach I suggested in my previous answer would be easier to implement, but less flexible, since the nvdsanalytics-test app does not have a config file.
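The struct and function names in this sketch are illustrative, not the actual deepstream-app API; it only shows the shape of such a bin.

    /* Hypothetical create_dsanalytics_bin, modeled on create_dsexample_bin. */
    typedef struct {
      GstElement *bin;
      GstElement *queue;
      GstElement *elem_dsanalytics;
    } NvDsAnalyticsBin;

    static gboolean
    create_dsanalytics_bin (const gchar * config_file_path, NvDsAnalyticsBin * out)
    {
      out->bin = gst_bin_new ("dsanalytics_bin");
      out->queue = gst_element_factory_make ("queue", "dsanalytics_queue");
      out->elem_dsanalytics =
          gst_element_factory_make ("nvdsanalytics", "dsanalytics0");
      if (!out->bin || !out->queue || !out->elem_dsanalytics)
        return FALSE;

      g_object_set (out->elem_dsanalytics, "config-file", config_file_path, NULL);

      gst_bin_add_many (GST_BIN (out->bin), out->queue,
          out->elem_dsanalytics, NULL);
      if (!gst_element_link (out->queue, out->elem_dsanalytics))
        return FALSE;

      /* Expose ghost pads so the bin can be linked like a single element. */
      GstPad *sinkpad = gst_element_get_static_pad (out->queue, "sink");
      gst_element_add_pad (out->bin, gst_ghost_pad_new ("sink", sinkpad));
      gst_object_unref (sinkpad);

      GstPad *srcpad = gst_element_get_static_pad (out->elem_dsanalytics, "src");
      gst_element_add_pad (out->bin, gst_ghost_pad_new ("src", srcpad));
      gst_object_unref (srcpad);

      return TRUE;
    }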

I will go for the easier option, since I am fairly new to DeepStream. I will try out your previous suggestion and let you know in case of any doubt.
Thank you.

Hello @CJR, sorry for causing trouble, but I have one question. Do I simply have to connect the extra elements to the others? That is, I don’t have to use a bin; I just have to link all the elements like this:

pgie->nvtracker->nvdsanalytics->tiler->nvvidconv->nvosd->nvvideoconvert->capsfilter(video/x-raw)->encoder->codecparse->mux->filesink

Please correct me if I am wrong.

You’re right.

Thank you for your confirmation. I will try it right away. One more thing: do we have to use nvvideoconvert both before and after nvosd?

Yes, we do. OSD needs its input in RGBA format, while the encoders accept I420/NV12 formats.
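In caps terms, the two conversions around the OSD come down to something like the snippet below; the caps strings are illustrative.

    /* Before nvdsosd: force RGBA. After nvdsosd: force I420 for the encoder. */
    GstElement *filter_rgba = gst_element_factory_make ("capsfilter", "filter-rgba");
    GstElement *filter_i420 = gst_element_factory_make ("capsfilter", "filter-i420");
    GstCaps *rgba_caps = gst_caps_from_string ("video/x-raw(memory:NVMM), format=RGBA");
    GstCaps *i420_caps = gst_caps_from_string ("video/x-raw(memory:NVMM), format=I420");
    g_object_set (filter_rgba, "caps", rgba_caps, NULL);
    g_object_set (filter_i420, "caps", i420_caps, NULL);
    gst_caps_unref (rgba_caps);
    gst_caps_unref (i420_caps);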

Hello @CJR,
I tried creating the pipeline as you suggested. I was able to resolve some errors, but the remaining errors are difficult to understand. I am posting my error and the code.

My error is:

(deepstream-nvdsanalytics-test:203): GStreamer-WARNING **: 16:08:13.115: Name 'nvvideo-converter' is not unique in bin 'nvdsanalytics-test-pipeline', not adding

(deepstream-nvdsanalytics-test:203): GStreamer-CRITICAL **: 16:08:13.116: gst_element_link_pads_full: assertion 'GST_IS_ELEMENT (dest)' failed
Elements could not be linked. Exiting.

My code is:

    #include <gst/gst.h>
    #include <glib.h>
    #include <stdio.h>
    #include <math.h>
    #include <string.h>
    #include <sys/time.h>
    #include <iostream>
    #include <vector>
    #include <unordered_map>
    #include "gstnvdsmeta.h"
    #include "nvds_analytics_meta.h"
    #include "deepstream_config.h"
    #ifndef PLATFORM_TEGRA
    #include "gst-nvmessage.h"
    #endif

    [....]

    int
    main (int argc, char *argv[])
    {
      GMainLoop *loop = NULL;
      GstElement *pipeline = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL,
                 *nvtracker = NULL, *nvdsanalytics = NULL,
          *nvvidconv = NULL, *nvosd = NULL, *nvvidconv1 = NULL, *transform1 = NULL, *cap_filter = NULL, *encoder = NULL, *codecparse = NULL, *mux = NULL, *tiler = NULL;
      GstCaps *caps = NULL;

    #ifdef PLATFORM_TEGRA
      GstElement *transform = NULL;
    #endif
      GstBus *bus = NULL;
      guint bus_watch_id;
      GstPad *nvdsanalytics_src_pad = NULL;
      guint i, num_sources;
      guint tiler_rows, tiler_columns;
      guint pgie_batch_size;
      guint bitrate = 2000000;
      guint profile = 0;

      /* Check input arguments */
      if (argc < 2) {
        g_printerr ("Usage: %s <uri1> [uri2] ... [uriN] \n", argv[0]);
        return -1;
      }
      num_sources = argc - 1;

      /* Standard GStreamer initialization */
      gst_init (&argc, &argv);
      loop = g_main_loop_new (NULL, FALSE);

      /* Create gstreamer elements */
      /* Create Pipeline element that will form a connection of other elements */
      pipeline = gst_pipeline_new ("nvdsanalytics-test-pipeline");

      /* Create nvstreammux instance to form batches from one or more sources. */
      streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");

      if (!pipeline || !streammux) {
        g_printerr ("One element could not be created. Exiting.\n");
        return -1;
      }
      gst_bin_add (GST_BIN (pipeline), streammux);

      for (i = 0; i < num_sources; i++) {
        GstPad *sinkpad, *srcpad;
        gchar pad_name[16] = { };
        GstElement *source_bin = create_source_bin (i, argv[i + 1]);

        if (!source_bin) {
          g_printerr ("Failed to create source bin. Exiting.\n");
          return -1;
        }

        gst_bin_add (GST_BIN (pipeline), source_bin);

        g_snprintf (pad_name, 15, "sink_%u", i);
        sinkpad = gst_element_get_request_pad (streammux, pad_name);
        if (!sinkpad) {
          g_printerr ("Streammux request sink pad failed. Exiting.\n");
          return -1;
        }

        srcpad = gst_element_get_static_pad (source_bin, "src");
        if (!srcpad) {
          g_printerr ("Failed to get src pad of source bin. Exiting.\n");
          return -1;
        }

        if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
          g_printerr ("Failed to link source bin to stream muxer. Exiting.\n");
          return -1;
        }

        gst_object_unref (srcpad);
        gst_object_unref (sinkpad);
      }

      /* Use nvinfer to infer on batched frame. */
      pgie = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");

      /* Use nvtracker to track detections on batched frame. */
      nvtracker = gst_element_factory_make ("nvtracker", "nvtracker");

      /* Use nvdsanalytics to perform analytics on object */
      nvdsanalytics = gst_element_factory_make ("nvdsanalytics", "nvdsanalytics");

      /* Use nvtiler to composite the batched frames into a 2D tiled array based
       * on the source of the frames. */
      tiler = gst_element_factory_make ("nvmultistreamtiler", "nvtiler");

      /* Use a converter to convert from NV12 to RGBA as required by nvdsosd */
      nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");
      if (!nvvidconv) {
        g_printerr ("nvvidconv element could not be created. Exiting.\n");
      }

      /* Create OSD to draw on the converted RGBA buffer */
      nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");
      if (!nvosd) {
        g_printerr ("nvosd element could not be created. Exiting.\n");
      }

      /* converter to convert RGBA to NV12 */
      nvvidconv1 = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter1");
      if (!nvvidconv1) {
        g_printerr ("nvvidconv1 element could not be created. Exiting.\n");
      }
      /* create cap_filter */
      cap_filter = gst_element_factory_make (NVDS_ELEM_CAPS_FILTER, "cap_filter");
      if (!cap_filter) {
        g_printerr ("cap_filter element could not be created. Exiting.\n");
      }

      /* create cap for filter */
      caps = gst_caps_from_string ("video/x-raw, format=I420");
      g_object_set (G_OBJECT (cap_filter), "caps", caps, NULL);

      /* create encoder */
      encoder = gst_element_factory_make (NVDS_ELEM_ENC_H264_HW, "encoder");
      if (!encoder) {
        g_printerr ("encoder element could not be created. Exiting.\n");
      }

      /* create transform1 */
      transform1 = gst_element_factory_make (NVDS_ELEM_VIDEO_CONV, "transform1");
      if (!transform1) {
        g_printerr ("transform1 element could not be created. Exiting.\n");
      }
      g_object_set (G_OBJECT (transform1), "gpu-id", 0, NULL);

      #ifdef IS_TEGRA
        g_object_set (G_OBJECT (encoder), "bufapi-version", 1, NULL);
      #endif

      g_object_set (G_OBJECT (encoder), "profile", profile, NULL);
      g_object_set (G_OBJECT (encoder), "bitrate", bitrate, NULL);

      /* create codecparse */
      codecparse = gst_element_factory_make ("h264parse", "h264-parser");
      if (!codecparse) {
        g_printerr ("codecparse element could not be created. Exiting.\n");
      }
      /* create mux */
      mux = gst_element_factory_make (NVDS_ELEM_MUX_MP4, "mux");
      if (!mux) {
        g_printerr ("mux element could not be created. Exiting.\n");
      }

      /* create sink */
      sink = gst_element_factory_make (NVDS_ELEM_SINK_FILE, "filesink");
      if (!sink) {
        g_printerr ("sink element could not be created. Exiting.\n");
      }
      g_object_set (G_OBJECT (sink), "location", "capture.mp4", "sync", 0, "async" , FALSE, NULL);

    #ifdef PLATFORM_TEGRA
      transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
    #endif

      if (!pgie || !nvtracker || !nvdsanalytics || !nvvidconv ||
          !nvosd || !nvvidconv1 || !cap_filter || !encoder || !codecparse || !mux || !sink) {
        g_printerr ("One element could not be created. Exiting.\n");
        return -1;
      }

    #ifdef PLATFORM_TEGRA
      if(!transform) {
        g_printerr ("One tegra element could not be created. Exiting.\n");
        return -1;
      }
    #endif

      g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
          MUXER_OUTPUT_HEIGHT, "batch-size", num_sources,
          "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);

      /* Configure the nvinfer element using the nvinfer config file. */
      g_object_set (G_OBJECT (pgie),
          "config-file-path", "nvdsanalytics_pgie_config.txt", NULL);

      /* Configure the nvtracker element for using the particular tracker algorithm. */
      g_object_set (G_OBJECT (nvtracker),
          "ll-lib-file", "/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so",
          "ll-config-file", "tracker_config.yml", "tracker-width", 640, "tracker-height", 480,
           NULL);

      /* Configure the nvdsanalytics element for using the particular analytics config file*/
      g_object_set (G_OBJECT (nvdsanalytics),
          "config-file", "config_nvdsanalytics.txt",
           NULL);

      /* Override the batch-size set in the config file with the number of sources. */
      g_object_get (G_OBJECT (pgie), "batch-size", &pgie_batch_size, NULL);
      if (pgie_batch_size != num_sources) {
        g_printerr
            ("WARNING: Overriding infer-config batch-size (%d) with number of sources (%d)\n",
            pgie_batch_size, num_sources);
        g_object_set (G_OBJECT (pgie), "batch-size", num_sources, NULL);
      }

      tiler_rows = (guint) sqrt (num_sources);
      tiler_columns = (guint) ceil (1.0 * num_sources / tiler_rows);
      /* we set the tiler properties here */
      g_object_set (G_OBJECT (tiler), "rows", tiler_rows, "columns", tiler_columns,
          "width", TILED_OUTPUT_WIDTH, "height", TILED_OUTPUT_HEIGHT, NULL);

      /* we add a message handler */
      bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
      bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
      gst_object_unref (bus);

      /* Set up the pipeline */
      /* we add all elements into the pipeline */
    #ifdef PLATFORM_TEGRA
      gst_bin_add_many (GST_BIN (pipeline), pgie, nvtracker, nvdsanalytics ,
              nvvidconv, nvosd, nvvidconv1, cap_filter, encoder, codecparse, mux, sink,
          NULL);

      /* we link the elements together:
       * nvstreammux -> nvinfer -> nvtracker -> nvdsanalytics -> nvvideoconvert ->
       * nvdsosd -> nvvideoconvert -> capsfilter -> encoder -> h264parse ->
       * mux -> filesink
       */
      if (!gst_element_link_many (streammux, pgie, nvtracker, nvdsanalytics,
                                  nvvidconv, nvosd, nvvidconv1, cap_filter, encoder, codecparse, mux, sink, NULL)) {
        g_printerr ("Elements could not be linked. Exiting.\n");
        return -1;
      }
    #else
      gst_bin_add_many (GST_BIN (pipeline), pgie, nvtracker, nvdsanalytics,
                        nvvidconv, nvosd, nvvidconv1, cap_filter, encoder, codecparse, mux, sink, NULL);
      /* we link the elements together:
       * nvstreammux -> nvinfer -> nvtracker -> nvdsanalytics -> nvvideoconvert ->
       * nvdsosd -> nvvideoconvert -> capsfilter -> encoder -> h264parse ->
       * mux -> filesink
       */
      if (!gst_element_link_many (streammux, pgie, nvtracker, nvdsanalytics,
          nvvidconv, nvosd, nvvidconv1, cap_filter, encoder, codecparse, mux, sink, NULL)) {
        g_printerr ("Elements could not be linked. Exiting.\n");
        return -1;
      }
    #endif

      /* Add a probe to get informed of the generated metadata. We add the probe
       * to the source pad of the nvdsanalytics element, since by that time the
       * buffer will have all the metadata.
       */
      nvdsanalytics_src_pad = gst_element_get_static_pad (nvdsanalytics, "src");
      if (!nvdsanalytics_src_pad)
        g_print ("Unable to get src pad\n");
      else
        gst_pad_add_probe (nvdsanalytics_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
            nvdsanalytics_src_pad_buffer_probe, NULL, NULL);

      /* Set the pipeline to "playing" state */
      g_print ("Now playing:");
      for (i = 0; i < num_sources; i++) {
        g_print (" %s,", argv[i + 1]);
      }
      g_print ("\n");
      gst_element_set_state (pipeline, GST_STATE_PLAYING);

      /* Wait till pipeline encounters an error or EOS */
      g_print ("Running...\n");
      g_main_loop_run (loop);

      /* Out of the main loop, clean up nicely */
      g_print ("Returned, stopping playback\n");
      gst_element_set_state (pipeline, GST_STATE_NULL);
      g_print ("Deleting pipeline\n");
      gst_object_unref (GST_OBJECT (pipeline));
      g_source_remove (bus_watch_id);
      g_main_loop_unref (loop);
      return 0;
    }

Please have a look.
Thanks in advance

Hello @CJR,
I have seen a post on the DeepStream forum about adding a filesink to the deepstream-test1 app; the link is:

Encoding and saving to a file with deepstream_test1_app.c

The solution pipeline suggested there is:

source->h264parser->decoder->pgie->filter1->nvvidconv->filter2->nvosd->nvvidconv1->filter3->videoconvert->filter4->x264enc->qtmux->filesink

The post does not mention which platform they are using. I don’t know whether this solution will work for me, because their pipeline does not have the nvstreammux plugin, which is present in my deepstream-test1 app.

Do you think I can use the above pipeline, or that it can work with some changes, or is it platform dependent? Please let me know.

Hello @CJR,
I tried the above method, but it is not working. I think the suggested solution will not work for my use case.
Can you please help me solve the error I posted along with the code?

Hi,

There is a small correction to the pipeline: the element after the OSD should be videoconvert, not nvvideoconvert.

(deepstream-nvdsanalytics-test:203): GStreamer-WARNING **: 16:08:13.115: Name 'nvvideo-converter' is not unique in bin 'nvdsanalytics-test-pipeline', not adding

There are two elements with the same name “nvvideo-converter”, but looking at your code it seems like you have rectified this already. A sketch of the corrected link order follows.
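Concretely, against the variable names in your code, the post-OSD converter and the link call would look roughly like this (a sketch, not tested):

    /* Replace the nvvideoconvert after nvosd with the software videoconvert. */
    nvvidconv1 = gst_element_factory_make ("videoconvert", "videoconvert-post-osd");

    if (!gst_element_link_many (streammux, pgie, nvtracker, nvdsanalytics,
            nvvidconv, nvosd, nvvidconv1, cap_filter, encoder, codecparse,
            mux, sink, NULL)) {
      g_printerr ("Elements could not be linked. Exiting.\n");
      return -1;
    }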