Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): dGPU GTX 1080
• DeepStream Version: 5.1
• TensorRT Version: 7.2.2
• NVIDIA GPU Driver Version (valid for GPU only): 465.27
• Issue Type (questions, new requirements, bugs): Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
Change the pipeline from
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
to
pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
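For reference, the relevant lines in deepstream_test_3.py after the change look roughly like this (the config file name below is just what I use locally, not from the stock sample):

    # deepstream_test_3.py -- use the Triton inference element instead of nvinfer
    pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # nvinferserver reads its settings from a protobuf text config file;
    # this file name/path is specific to my setup
    pgie.set_property("config-file-path", "dstest3_pgie_inferserver_config.txt")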
Add a valid TRTIS (Triton) config file to load an ONNX model.
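The Triton model config itself is nothing unusual. Roughly like this (the tensor names and dims below are placeholders, not my exact values):

    # config.pbtxt in the model repository, e.g. models/higher_hrnet/config.pbtxt
    name: "higher_hrnet"
    platform: "onnxruntime_onnx"
    max_batch_size: 1
    input [
      {
        name: "input"           # placeholder tensor name
        data_type: TYPE_FP32
        dims: [ 3, 512, 512 ]   # placeholder dims
      }
    ]
    output [
      {
        name: "output"          # placeholder tensor name
        data_type: TYPE_FP32
        dims: [ 17, 128, 128 ]  # placeholder dims
      }
    ]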
Add import cv2 to deepstream_test_3.py; there is no need to call any OpenCV method in the program.
Run python3 deepstream_test_3.py file:///path/to/video
With that recipe, execution fails; see the error below.
If import cv2 is removed from the .py file, everything works again.
I have also tried loading a tensorflow_graphdef model, and that works.
The problem only seems to show up with the combination of ONNX models, Triton, and OpenCV.
Thanks in advance.
I0603 16:23:33.128388 804 model_repository_manager.cc:810] loading: higher_hrnet:1
E0603 16:23:33.226385 804 model_repository_manager.cc:986] failed to load 'higher_hrnet' version 1: Not found: unable to load backend library: /opt/tritonserver/backends/onnxruntime/libinference_engine.so: undefined symbol: _ZN3tbb8internal13numa_topology4fillEPi
ERROR: infer_trtis_server.cpp:1044 Triton: failed to load model higher_hrnet, triton_err_str:Invalid argument, err_msg:load failed for model 'higher_hrnet': version 1: Not found: unable to load backend library: /opt/tritonserver/backends/onnxruntime/libinference_engine.so: undefined symbol: _ZN3tbb8internal13numa_topology4fillEPi;