DeepStream SDK FAQ

17. [DeepStream_dGPU_App] Using OpenCV to run a DeepStream pipeline

Sometimes the GStreamer pipeline in OpenCV fails, typically because OpenCV was not built with GStreamer support. Please refer to the following topic to resolve this problem:

How to compile OpenCV with Gstreamer [Ubuntu&Windows] | by Galaktyk 01 | Medium
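
For reference, here is a minimal Python sketch (not from the original topic) showing how a DeepStream-style pipeline can be fed into OpenCV once OpenCV has been rebuilt with GStreamer support; the pipeline string and the sample_720p.h264 file name are only placeholders for illustration.

import cv2

# Confirm that this cv2 build has GStreamer enabled; if the line does not say
# "GStreamer: YES", rebuild OpenCV as described in the article above.
print([l for l in cv2.getBuildInformation().splitlines() if "GStreamer" in l])

# Hypothetical pipeline: hardware decode with nvv4l2decoder, then convert to
# system-memory BGR frames that OpenCV can consume through appsink.
pipeline = (
    "filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! "
    "nvvideoconvert ! video/x-raw,format=BGRx ! videoconvert ! "
    "video/x-raw,format=BGR ! appsink drop=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()  # frame is a regular BGR numpy array
    if not ok:
        break
    # ... run OpenCV processing on `frame` here ...
cap.release()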

18. Open model deployment on DeepStream (Thanks for sharing!)
Yolo2/3/4/5/OR: Improved DeepStream for YOLO models (Thanks @marcoslucianops)
YoloV4: GitHub - NVIDIA-AI-IOT/yolo_deepstream: yolo model qat and deploy with deepstream&tensorrt + deepstream_yolov4.tgz - Google Drive
YoloV4+dspreprocess: deepstream_yolov4_with_nvdspreprocess.tgz - Google Drive
YoloV5+nvinfer: GitHub - beyondli/Yolo_on_Jetson
Yolov5-small: Custom Yolov5 on Deepstream 6.0 (Thanks @raghavendra.ramya)
YoloV5+Triton: Triton Inference through docker - #7 by mchi
YoloV5_gpu_optimization: GitHub - NVIDIA-AI-IOT/yolov5_gpu_optimization: This repository provides YOLOV5 GPU optimization sample
YoloV7: GitHub - NVIDIA-AI-IOT/yolo_deepstream: yolo model qat and deploy with deepstream&tensorrt
YoloV7+Triton: Deepstream / Triton Server - YOLOv7 (Thanks @Levi_Pereira)
YoloV7+nvinfer: Tutorial: How to run YOLOv7 on Deepstream (Thanks @vcmike)
YoloV8+nvinfer: Deploy YOLOv8 on NVIDIA Jetson using TensorRT and DeepStream SDK | Seeed Studio Wiki

19. [DSx_All_App] How to use a classification model as the PGIE?
The input is a picture of a blue car and we want to get the "blue" label. Here is the test command:
blueCar.zip (37.6 KB)
dstest_appsrc_config.txt (3.7 KB)

gst-launch-1.0 filesrc location=blueCar.jpg ! jpegdec ! videoconvert ! video/x-raw,format=I420 ! nvvideoconvert ! video/x-raw\(memory:NVMM\),format=NV12 ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=./dstest_appsrc_config.txt ! nvvideoconvert ! video/x-raw\(memory:NVMM\),format=RGBA ! nvdsosd ! nvvideoconvert ! video/x-raw,format=I420 ! jpegenc ! filesink location=out.jpg
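
The same pipeline can also be launched from Python with Gst.parse_launch(); this is only a sketch and assumes blueCar.jpg and dstest_appsrc_config.txt are in the current directory, exactly as in the command above.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=blueCar.jpg ! jpegdec ! videoconvert ! video/x-raw,format=I420 ! "
    "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=./dstest_appsrc_config.txt ! "
    "nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! nvdsosd ! "
    "nvvideoconvert ! video/x-raw,format=I420 ! jpegenc ! filesink location=out.jpg"
)
pipeline.set_state(Gst.State.PLAYING)
# Wait until the single JPEG has been processed (EOS) or an error occurs, then shut down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)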

[Access output of Primary Classifier]
[Resnet50 with imagenet dataset image classification using deepstream sdk]

20. How to troubleshoot the error "cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1"

CUDA_ERROR_INVALID_GRAPHICS_CONTEXT = 219

This indicates an error with OpenGL or DirectX context.

Make sure you use the NVIDIA X driver.
Please follow this guide to set up the NVIDIA X server: Chapter 6. Configuring X for the NVIDIA Driver
These are some common driver-related problems you may encounter: Chapter 8. Common Problems (nvidia.com)

https://forums.developer.nvidia.com/t/issue-runnung-deepstream-app-docker-container-5-0-6-0-in-rtx-3080-and-a5000-laptop/213783
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1 - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

21. [Jetson] A TensorRT version mismatch between the DeepStream 6.1 docker and the device can be fixed by an APT update for JetPack 5.0.1 DP

1. Run the DeepStream docker container:
   docker run --rm -it --runtime=nvidia REPOSITORY:TAG
2. Remove the previous TensorRT packages:
   apt-get purge --remove libnvinfer8 libnvinfer-plugin8 libnvinfer-bin python3-libnvinfer
3. Update the package lists:
   apt-get update
4. Install the TensorRT 8.4.0.11 packages:
   apt-get install libnvinfer8 libnvinfer-plugin8 libnvinfer-bin python3-libnvinfer
5. Verify the TensorRT version:
   nm -D /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.4.0 | grep version

related topic 218888

22. [Jetson] VIC Configuration failed image scale factor exceeds 16
This issue is a limitation of Jetson VIC processing and can be fixed by modifying the configuration, for example:

# model dimensions: height is 1168, width is 720
uff-input-dims=3;1168;720;0
# if scaling-compute-hw = VIC, input-object-min-height needs to be even and greater than or equal to (model height)/16
input-object-min-height=74
# if scaling-compute-hw = VIC, input-object-min-width needs to be even and greater than or equal to (model width)/16
input-object-min-width=46
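
As a quick sanity check of those numbers, here is a small Python sketch (illustration only) that computes the minimum even values satisfying the 16x VIC downscale limit for a given model input size:

import math

def vic_min_object_dims(model_height, model_width):
    """Smallest even input-object-min-* values for scaling-compute-hw=VIC."""
    def min_even(dim):
        m = math.ceil(dim / 16)            # e.g. 1168 / 16 = 73
        return m if m % 2 == 0 else m + 1  # round up to the next even number
    return min_even(model_height), min_even(model_width)

print(vic_min_object_dims(1168, 720))  # -> (74, 46), matching the config above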

related topic [VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation - #3 by Amycao]

23. How to change the Python sample apps from display output to file output or fakesink, for users who do not have a monitor attached to their device. The patch below is based on the test1 sample.

Usage: python3 deepstream_test_1.py <media file or uri> <sink type: 1-filesink; 2-fakesink; 3-display sink>

nvidia@ubuntu:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1$ diff -Naur deepstream_test_1.py.orig deepstream_test_1.py
--- deepstream_test_1.py.orig	2022-08-15 20:12:39.809775283 +0800
+++ deepstream_test_1.py	2022-08-15 22:06:27.052250778 +0800
@@ -123,8 +123,8 @@
 
 def main(args):
     # Check input arguments
-    if len(args) != 2:
-        sys.stderr.write("usage: %s <media file or uri>\n" % args[0])
+    if len(args) != 3:
+        sys.stderr.write("usage: %s <media file or uri> <sink type: 1-filesink; 2-fakesink; 3-display sink>\n" % args[0])
         sys.exit(1)
 
     # Standard GStreamer initialization
@@ -179,14 +179,46 @@
     if not nvosd:
         sys.stderr.write(" Unable to create nvosd \n")
 
-    # Finally render the osd output
-    if is_aarch64():
-        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
-
-    print("Creating EGLSink \n")
-    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
-    if not sink:
-        sys.stderr.write(" Unable to create egl sink \n")
+    if args[2] == '1':
+
+        nvvidconv1 = Gst.ElementFactory.make ("nvvideoconvert", "nvvid-converter1")
+        if not nvvidconv1:
+            sys.stderr.write("Unable to create nvvidconv1")
+        capfilt = Gst.ElementFactory.make ("capsfilter", "nvvideo-caps")
+        if not capfilt:
+            sys.stderr.write("Unable to create capfilt")
+        caps = Gst.caps_from_string ('video/x-raw(memory:NVMM), format=I420')
+#        feature = gst_caps_features_new ("memory:NVMM", NULL)
+#        gst_caps_set_features (caps, 0, feature)
+        capfilt.set_property('caps', caps)
+        print("Creating nvv4l2h264enc \n")
+        nvh264enc = Gst.ElementFactory.make ("nvv4l2h264enc" ,"nvvideo-h264enc")
+        if not nvh264enc:
+            sys.stderr.write("Unable to create nvh264enc")
+        print("Creating filesink \n")    
+        sink = Gst.ElementFactory.make ("filesink", "nvvideo-renderer")
+        sink.set_property('location', './out.h264')
+        if not sink:
+            sys.stderr.write("Unable to create filesink")
+
+    elif args[2] == '2':
+
+        print("Creating fakesink \n")
+        sink = Gst.ElementFactory.make ("fakesink", "fake-renderer")
+        if not sink:
+            sys.stderr.write("Unable to create fakesink")
+
+    elif args[2] == '3':
+
+        print("Creating EGLSink \n")
+        sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
+        if not sink:
+            sys.stderr.write(" Unable to create egl sink \n")
+        if is_aarch64():
+            transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
+            if not transform:
+                sys.stderr.write(" Unable to create egl transform \n")
 
     print("Playing file %s " %args[1])
     source.set_property('location', args[1])
@@ -204,9 +236,17 @@
     pipeline.add(pgie)
     pipeline.add(nvvidconv)
     pipeline.add(nvosd)
-    pipeline.add(sink)
-    if is_aarch64():
-        pipeline.add(transform)
+    if args[2] == '1':
+        pipeline.add(nvvidconv1)
+        pipeline.add(capfilt)
+        pipeline.add(nvh264enc)
+        pipeline.add(sink)
+    elif args[2] == '2':
+        pipeline.add(sink)
+    elif args[2] == '3':
+        pipeline.add(sink)
+        if is_aarch64():
+            pipeline.add(transform)
 
     # we link the elements together
     # file-source -> h264-parser -> nvh264-decoder ->
@@ -225,11 +265,19 @@
     streammux.link(pgie)
     pgie.link(nvvidconv)
     nvvidconv.link(nvosd)
-    if is_aarch64():
-        nvosd.link(transform)
-        transform.link(sink)
-    else:
+    if args[2] == '1':
+        nvosd.link(nvvidconv1)
+        nvvidconv1.link(capfilt)
+        capfilt.link(nvh264enc)
+        nvh264enc.link(sink)
+    elif args[2] == '2':
         nvosd.link(sink)
+    elif args[2] == '3':
+        if is_aarch64():
+            nvosd.link(transform)
+            transform.link(sink)
+        else:
+            nvosd.link(sink)
 
     # create an event loop and feed gstreamer bus mesages to it
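
For example, assuming an H.264 elementary stream such as the SDK's sample_720p.h264 is available locally, "python3 deepstream_test_1.py sample_720p.h264 1" writes the annotated output to ./out.h264, while sink type 2 (fakesink) discards the rendered output and is useful for headless performance or metadata testing.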

24. [DeepStream 6.1.1 GA] Simple demo for adding dewarper support to deepstream-app

Usage: deepstream-app -c source1_dewarper_test.txt

source1_dewarper_test.txt (3.6 KB)
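
In short, the patch below adds a num-surfaces-per-frame key to the [streammux] group, moves the dewarper source-id setting from the source-bin code into the dewarper config parser, and hooks the [dewarper] group into both the key-file and YAML config parsers of deepstream-app; the attached source1_dewarper_test.txt shows the corresponding configuration.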

---
 .../src/deepstream_config_file_parser.c       |  15 ++-
 .../common/src/deepstream_source_bin.c        |   5 -
 .../common/src/deepstream_streammux.c         |   5 +-
 .../deepstream_app_config_parser.c            |   7 +-
 .../deepstream_app_config_parser_yaml.cpp     |   4 +

diff --git a/apps/deepstream/common/src/deepstream_config_file_parser.c b/apps/deepstream/common/src/deepstream_config_file_parser.c
--- a/apps/deepstream/common/src/deepstream_config_file_parser.c
+++ b/apps/deepstream/common/src/deepstream_config_file_parser.c
@@ -76,6 +76,8 @@ GST_DEBUG_CATEGORY (APP_CFG_PARSER_CAT);
 #define CONFIG_GROUP_STREAMMUX_FRAME_NUM_RESET_ON_STREAM_RESET "frame-num-reset-on-stream-reset"
 #define CONFIG_GROUP_STREAMMUX_FRAME_NUM_RESET_ON_EOS "frame-num-reset-on-eos"
 #define CONFIG_GROUP_STREAMMUX_FRAME_DURATION "frame-duration"
+#define CONFIG_GROUP_STREAMMUX_NUM_SURFACES_PER_FRAME "num-surfaces-per-frame"
+
 #define CONFIG_GROUP_STREAMMUX_CONFIG_FILE_PATH "config-file"
 #define CONFIG_GROUP_STREAMMUX_SYNC_INPUTS "sync-inputs"
 #define CONFIG_GROUP_STREAMMUX_MAX_LATENCY "max-latency"
@@ -742,6 +744,11 @@ parse_streammux (NvDsStreammuxConfig *config, GKeyFile *key_file, gchar *cfg_fil
           g_key_file_get_boolean(key_file, CONFIG_GROUP_STREAMMUX,
           CONFIG_GROUP_STREAMMUX_ASYNC_PROCESS, &error);
       CHECK_ERROR(error);
+    } else if (!g_strcmp0(*key, CONFIG_GROUP_STREAMMUX_NUM_SURFACES_PER_FRAME)) {
+        config->num_surface_per_frame =
+            g_key_file_get_integer(key_file, CONFIG_GROUP_STREAMMUX,
+            CONFIG_GROUP_STREAMMUX_NUM_SURFACES_PER_FRAME, &error);
+        CHECK_ERROR(error);
     } else {
       NVGSTDS_WARN_MSG_V ("Unknown key '%s' for group [%s]", *key,
           CONFIG_GROUP_STREAMMUX);
@@ -1070,8 +1077,12 @@ parse_dewarper (NvDsDewarperConfig * config, GKeyFile * key_file, gchar *cfg_fil
         g_key_file_get_integer (key_file, CONFIG_GROUP_DEWARPER,
             CONFIG_GROUP_DEWARPER_NUM_SURFACES_PER_FRAME, &error);
       CHECK_ERROR (error);
-    }
-    else {
+    } else if (!g_strcmp0 (*key, CONFIG_GROUP_DEWARPER_SOURCE_ID)) {
+      config->source_id =
+          g_key_file_get_integer (key_file, CONFIG_GROUP_DEWARPER,
+          CONFIG_GROUP_DEWARPER_SOURCE_ID, &error);
+      CHECK_ERROR (error);
+    } else {
       NVGSTDS_WARN_MSG_V ("Unknown key '%s' for group [%s]", *key,
           CONFIG_GROUP_DEWARPER);
     }
diff --git a/apps/deepstream/common/src/deepstream_source_bin.c b/apps/deepstream/common/src/deepstream_source_bin.c
--- a/apps/deepstream/common/src/deepstream_source_bin.c
+++ b/apps/deepstream/common/src/deepstream_source_bin.c
@@ -1527,11 +1527,6 @@ create_multi_source_bin (guint num_sub_bins, NvDsSourceConfig * configs,
       goto done;
     }
 
-    if(configs->dewarper_config.enable) {
-        g_object_set(G_OBJECT(bin->sub_bins[i].dewarper_bin.nvdewarper), "source-id",
-                configs[i].source_id, NULL);
-    }
-
     bin->num_bins++;
   }
   NVGSTDS_BIN_ADD_GHOST_PAD (bin->bin, bin->streammux, "src");
diff --git a/apps/deepstream/common/src/deepstream_streammux.c b/apps/deepstream/common/src/deepstream_streammux.c
--- a/apps/deepstream/common/src/deepstream_streammux.c
+++ b/apps/deepstream/common/src/deepstream_streammux.c
@@ -92,7 +92,10 @@ set_streammux_properties (NvDsStreammuxConfig *config, GstElement *element)
                config->max_latency, NULL);
   g_object_set (G_OBJECT (element), "frame-num-reset-on-eos",
       config->frame_num_reset_on_eos, NULL);
-
+  if (config->num_surface_per_frame > 1) {
+      g_object_set (G_OBJECT (element), "num-surfaces-per-frame",
+          config->num_surface_per_frame, NULL);
+  }
   ret= TRUE;
 
   return ret;
diff --git a/apps/deepstream/sample_apps/deepstream-app/deepstream_app_config_parser.c b/apps/deepstream/sample_apps/deepstream-app/deepstream_app_config_parser.c
--- a/apps/deepstream/sample_apps/deepstream-app/deepstream_app_config_parser.c
+++ b/apps/deepstream/sample_apps/deepstream-app/deepstream_app_config_parser.c
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2018-2022, NVIDIA CORPORATION. All rights reserved.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
@@ -373,6 +373,11 @@ parse_config_file (NvDsConfig *config, gchar *cfg_file_path)
       parse_err = !parse_osd (&config->osd_config, cfg_file);
     }
 
+    if (!g_strcmp0 (*group, CONFIG_GROUP_DEWARPER)) {
+      parse_err = !parse_dewarper (&config->multi_source_config[0].dewarper_config,
+          cfg_file, cfg_file_path);
+    }
+
     if (!g_strcmp0 (*group, CONFIG_GROUP_PREPROCESS)) {
         parse_err =
             !parse_preprocess (&config->preprocess_config, cfg_file,
diff --git a/apps/deepstream/sample_apps/deepstream-app/deepstream_app_config_parser_yaml.cpp b/apps/deepstream/sample_apps/deepstream-app/deepstream_app_config_parser_yaml.cpp
--- a/apps/deepstream/sample_apps/deepstream-app/deepstream_app_config_parser_yaml.cpp
+++ b/apps/deepstream/sample_apps/deepstream-app/deepstream_app_config_parser_yaml.cpp
@@ -129,6 +129,7 @@ parse_config_file_yaml (NvDsConfig *config, gchar *cfg_file_path)
   std::string sink_str = "sink";
   std::string sgie_str = "secondary-gie";
   std::string msgcons_str = "message-consumer";
+  std::string dewarper_str = "dewarper";
 
   config->source_list_enabled = FALSE;
 
@@ -183,6 +184,9 @@ parse_config_file_yaml (NvDsConfig *config, gchar *cfg_file_path)
     else if (paramKey == "osd") {
       parse_err = !parse_osd_yaml(&config->osd_config, cfg_file_path);
     }
+    else if (paramKey.compare(0, dewarper_str.size(), dewarper_str) == 0) {
+      parse_err = !parse_dewarper_yaml (&config->multi_source_config[0].dewarper_config, cfg_file_path);
+    }
     else if (paramKey == "pre-process") {
       parse_err = !parse_preprocess_yaml(&config->preprocess_config, cfg_file_path);
     }

25. [ALL_ALL_nvdsinfer] Add TensorRT Verbose log

To debug nvinfer-related issues inside gst-nvinfer, we can enable the nvinfer log by setting the environment variable "NVDSINFER_LOG_LEVEL".

The value can be set to one of the following numbers for the corresponding log level:

0: NVDSINFER_LOG_ERROR
1: NVDSINFER_LOG_WARNING
2: NVDSINFER_LOG_INFO
3: NVDSINFER_LOG_DEBUG

Example of enabling the debug log:
export NVDSINFER_LOG_LEVEL=3
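
The variable only needs to be visible to the process that runs the gst-nvinfer element, so it can also be set inline for a single run, for example:
NVDSINFER_LOG_LEVEL=3 deepstream-app -c <your config file>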

When the NVDSINFER_LOG_LEVEL environment variable is not set, the default log level is NVDSINFER_LOG_ERROR.

26. [ALL_ALL_common] Historical DeepStream documentation links

For users who are still working on older DeepStream versions, the historical DeepStream documentation can be found at the following links: