Deepstream dGPU Triton Python Bindings OpenCV ONNX

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
dGPU GTX1080
• DeepStream Version
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

  • Run a container based on
  • Install the NVIDIA DeepStream Python bindings.
  • Use apps/deepstream-test3
  • Change the pipeline from
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    to
    pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference").
  • Add a valid TRTIS (Triton) config file to load an ONNX model.
  • Add import cv2 to the script. There is no need to call any cv2 method in the program.
  • Run python3 file:///path/to/video
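For reference, a minimal nvinferserver config for loading an ONNX model through Triton typically looks roughly like the sketch below. The model name is taken from the error log further down; the repo path, batch size, and preprocess values are placeholder assumptions, not the actual files from this setup:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1          # placeholder
  backend {
    triton {
      model_name: "higher_hrnet"   # model name as it appears in the log
      version: -1                  # -1 = use the latest version in the repo
      model_repo {
        root: "/path/to/triton_model_repo"   # placeholder path
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    normalize {
      scale_factor: 0.0078125    # placeholder normalization
    }
  }
}
```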

With that recipe, the execution fails; see the error below.
If import cv2 is removed from the .py file, everything works again.
I have tried loading a tensorflow_graphdef model, and it works.
The problem seems to show up only with the combination of ONNX models, Triton, and OpenCV.

Thanks in advance.

I0603 16:23:33.128388 804] loading: higher_hrnet:1
E0603 16:23:33.226385 804] failed to load 'higher_hrnet' version 1: Not found: unable to load backend library: /opt/tritonserver/backends/onnxruntime/ undefined symbol: _ZN3tbb8internal13numa_topology4fillEPi
ERROR: infer_trtis_server.cpp:1044 Triton: failed to load model higher_hrnet, triton_err_str:Invalid argument, err_msg:load failed for model 'higher_hrnet': version 1: Not found: unable to load backend library: /opt/tritonserver/backends/onnxruntime/ undefined symbol: _ZN3tbb8internal13numa_topology4fillEPi;
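The undefined symbol in the log demangles to tbb::internal::numa_topology::fill(int*), i.e. whichever libtbb ends up loaded in the process does not export a symbol the onnxruntime backend expects. A quick, generic way to check whether a given shared library exports a symbol is a small ctypes probe; the libm check below is only a sanity example, and the libtbb path in the trailing comment is an assumption you would adapt to your container:

```python
import ctypes
import ctypes.util

def exports_symbol(libpath, symbol):
    """Return True if the shared library at libpath exports `symbol`."""
    try:
        lib = ctypes.CDLL(libpath)
        getattr(lib, symbol)  # raises AttributeError if the symbol is absent
        return True
    except (OSError, AttributeError, TypeError):
        return False

# Sanity check against libm, which exports cos():
print(exports_symbol(ctypes.util.find_library("m"), "cos"))           # True
print(exports_symbol(ctypes.util.find_library("m"), "not_a_symbol"))  # False

# For this issue you would probe the libtbb copies instead, e.g. (path assumed):
# exports_symbol("/usr/lib/x86_64-linux-gnu/libtbb.so.2",
#                "_ZN3tbb8internal13numa_topology4fillEPi")
```

If the system libtbb (or one bundled by another wheel) lacks the symbol while the one onnxruntime was built against has it, the backend library fails to load exactly as in the log above.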

Is your cv2 installed with the following command?

# pip3 install opencv-python


Can't reproduce the issue with the below change to deepstream_test_3, which imports cv2 in the script and switches to nvinferserver with the densenet_onnx model.

diff --git a/ b/
index 81354a4..9015252 100644
--- a/
+++ b/
@@ -40,6 +40,14 @@ from common.FPS import GETFPS

 import pyds

+import cv2
+print("OpenCV Version: {}".format(cv2.__version__))
+image = cv2.imread("/opt/nvidia/deepstream/deepstream/samples/streams/sample_industrial.jpg")
+image = cv2.rotate(image, cv2.cv2.ROTATE_90_CLOCKWISE)
+cv2.imwrite("test.png", image)

@@ -106,7 +114,7 @@ def tiler_src_pad_buffer_probe(pad,info,u_data):
             except StopIteration:
-            obj_counter[obj_meta.class_id] += 1
+            #obj_counter[obj_meta.class_id] += 1
             except StopIteration:
@@ -266,7 +274,8 @@ def main(args):
     print("Creating Pgie \n ")
-    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
+    #pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
+    pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
     if not pgie:
         sys.stderr.write(" Unable to create pgie \n")
     print("Creating tiler \n ")
@@ -290,7 +299,8 @@ def main(args):
             sys.stderr.write(" Unable to create transform \n")

     print("Creating EGLSink \n")
-    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
+    #sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
+    sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
     if not sink:
         sys.stderr.write(" Unable to create egl sink \n")

@@ -302,7 +312,7 @@ def main(args):
     streammux.set_property('height', 1080)
     streammux.set_property('batch-size', number_sources)
     streammux.set_property('batched-push-timeout', 4000000)
-    pgie.set_property('config-file-path', "dstest3_pgie_config.txt")
+    pgie.set_property('config-file-path', "/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-trtis/config_infer_primary_classifier_densenet_onnx.txt")
     if(pgie_batch_size != number_sources):
         print("WARNING: Overriding infer-config batch-size",pgie_batch_size," with number of sources ", number_sources," \n")

Great, thanks, I wasn't aware of this detail.
By the way, can you provide a short explanation of this behavior?