Creating a custom plugin in DeepStream

Hello everyone,
I am trying to build a custom plugin for an "object counter": if an object crosses a line at given coordinates, it should be counted. I am using YOLOv3-based inference as provided in the "sources" directory of the DeepStream SDK docker container, and the KLT tracker for tracking the objects.
I am following this post - []. It provides basic information about the gst-dsexample plugin.

I am fairly new to DeepStream and video analytics in general. I didn't get much out of the post I shared above, and I have not been able to find a good procedure to follow for writing my custom plugin, or any other method to accomplish my project.

In the end, I want my custom plugin (or any other method) to perform the following tasks:

  1. Extract the required information from the buffer, such as bounding-box
    coordinates and unique tracking IDs.
  2. Count the objects crossing the line, and update the count and the line
    coordinates in the buffer so they can be shown in the output stream.

I already have Python code for the line-counter algorithm, but I don't know how to implement it in DeepStream.

Please help me, and let me know if any other information is required from my side to solve the issue.
• Hardware Platform: GPU (Tesla K20Xm)
• DeepStream Version: 4.0.2-19.12
• TensorRT Version: 6.0.1
• NVIDIA GPU Driver Version: 440.33.01
Thanks in advance.


Line crossing has already been implemented in the nvdsanalytics plugin; could you please take a look at that?

Here’s the documentation link.
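As a pointer while reading the docs: the plugin is driven entirely by its config file. A minimal config_nvdsanalytics.txt with a single line-crossing rule might look like the following sketch (the coordinates, the "Entry" label, and the resolution are placeholders to adapt to your stream; modeled on the sample config shipped with the SDK):

```
[property]
enable=1
# Resolution at which the coordinates below are defined
config-width=1920
config-height=1080
osd-mode=2

[line-crossing-stream-0]
enable=1
# <Label>=x1;y1;x2;y2 : the line objects must cross to be counted
line-crossing-Entry=100;900;1800;900
# -1 applies the rule to all detected classes
class-id=-1
```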

Thank you @CJR for the help. I will definitely look into it and come back to you.

Hello @CJR,
I have gone through the documentation you shared, and this is definitely exactly what I was looking for. However, I am not able to use the plugin. I tried two methods of applying the nvdsanalytics plugin to my use case:

  1. I tried running deepstream-nvdsanalytics-test in "/sources/apps/sample-apps" by following the README. But since I am working on a server inside the DeepStream docker container, I was not able to run the test: I believe it uses an EGL sink and a tiled display, and I don't know how to change them.

  2. I tried copying the contents of config_nvdsanalytics.txt from "/sources/apps/sample-apps/deepstream-nvdsanalytics-test" into one of the config files in "/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt". I disabled the tiled display and changed the sink type to File so that I could save the video, but the saved video showed no nvdsanalytics output. I previously tried the same method with the ds-example plugin, and that worked fine.

I want to integrate the tracker and the nvdsanalytics plugin with the YOLOv3 config file in "/sources/objectDetector_Yolo". I was able to integrate the tracker by adding its details to the YOLOv3 config file, but I don't know how to integrate nvdsanalytics in the same way; that attempt is method 2 above.

Please help me understand the correct way to use the nvdsanalytics plugin.

Hello @CJR,
I have tried running deepstream-nvdsanalytics-test using the gst-launch command mentioned in the documentation. I changed the config file paths as needed and also changed the EGL sink to a file sink (filesink location=capture.mp4). I got the capture.mp4 file, but it was empty; nothing got written.
The gst-launch command I used is:
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 live-source=0 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/ ll-config-file=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/tracker_config.yml tracker-width=640 tracker-height=384 ! nvdsanalytics config-file=config_nvdsanalytics.txt ! nvmultistreamtiler ! nvvideoconvert ! nvdsosd ! filesink location=capture.mp4

Please help me to solve this issue.

Hello everyone,
Can anyone please help me with my problem? It has been quite a while and I am still not able to solve my issue.
Please help.


Apologies for the delay in response. You can change the type of sink in deepstream-nvdsanalytics-test by replacing
sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
with
sink = gst_element_factory_make ("filesink", "filesink");
g_object_set (G_OBJECT (sink), "location", "capture.mp4", NULL);

Hello @CJR,
Thank you very much; I am now getting details like direction and line crossing printed in the terminal. But the video I am saving is still empty; nothing gets written to it.
One more thing I want to ask: how can I show details like lines, counts, regions, etc. directly on the frame, the way bounding boxes and tracker IDs are shown? Do I have to change code around the OSD sink, or is there an easier way to do it?
Again thank you for helping me.

Sorry, you'll need a few more changes than just switching to filesink.

You will need to connect osd -> nvvideoconvert -> capsfilter (video/x-raw) -> encoder -> codecparse -> mux -> filesink.

You can refer to create_encode_file_bin function in deepstream_sink_bin.c where a similar snippet has been implemented.

Any info you want to be displayed can be added to the OSD metadata before the buffer enters the OSD's sink pad. Typically this is done in a buffer probe. In the nvdsanalytics-test sample, you can refer to the nvdsanalytics_src_pad_buffer_probe function.

Hello @CJR,
Thank you for the response. I went through the create_encode_file_bin function, but I find it difficult to follow, since after creating each element the function reads details from the config to set the element up.

Is there an easier way to do it, such as attaching the nvdsanalytics plugin to the sample apps in samples/configs/deepstream-app? There it is easy to modify parameters using a config file.

We will be adding nvdsanalytics plugin support to deepstream-app in the next release. Since the sources are already available to you, you can do it yourself as well: check how the ds-example plugin is added in the deepstream-app sources. There are comments in the code explaining what needs to be done. You can start by looking at the create_dsexample_bin function in the deepstream-app sources and making similar changes for the nvdsanalytics plugin. The approach I suggested in my previous answer would be easier to implement, but less flexible, since the nvdsanalytics-test app does not have a config file.

I will go for the easier option, since I am fairly new to DeepStream. I will try out your previous suggestion and let you know in case of any doubts.
Thank you.

Hello @CJR, sorry for the trouble, but I have one question. If I understand correctly, I just have to connect the extra elements to the others; I don't have to use a bin. So I would link all the elements like this:

pgie -> nvtracker -> nvdsanalytics -> tiler -> nvvidconv -> nvosd -> nvvideoconvert -> capsfilter (x-raw) -> encoder -> codecparse -> mux -> filesink

Please correct me if I am wrong.

You're right.

Thank you for the confirmation. I will try it right away. One more thing: do we have to use nvvidconv both before and after nvosd?

Yes, we do. The OSD needs its input in RGBA format, while the encoders accept I420/NV12 formats.