DeepStream 6.4 with Triton

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : NVIDIA A2
• DeepStream Version : 6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version : 12.2
• NVIDIA GPU Driver Version (valid for GPU only) : 535.154.05
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I am using deepstream_test3.py, to which I added a tracker and SGIE1/SGIE2 models served through Triton,
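Roughly, the element order I wired up looks like this (a simplified sketch of my changes, not the exact code; the SGIE entry names are just placeholders):

```python
# Simplified sketch of the element chain added to deepstream_test3.py.
# The real code creates each element with Gst.ElementFactory.make() and
# links them with element.link(); this only shows the intended order.
PIPELINE_ORDER = [
    "nvstreammux",        # batches the input streams
    "nvinferserver",      # PGIE (peoplenet) served through Triton
    "nvtracker",          # uses libnvds_nvmultiobjecttracker.so
    "sgie1",              # first secondary nvinferserver (placeholder name)
    "sgie2",              # second secondary nvinferserver (placeholder name)
    "nvmultistreamtiler",
    "nvvideoconvert",
    "nvdsosd",
    "fakesink",
]

for upstream, downstream in zip(PIPELINE_ORDER, PIPELINE_ORDER[1:]):
    print(f"{upstream} -> {downstream}")
```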

and I am getting the output below.

Output

[06-Feb-24 19:48:32][ERROR] Error: decodebin cannot be found
[06-Feb-24 19:48:32][ERROR] source not found in decodebin child
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:01:32.280886371 1 0x5653e6d474c0 WARN nvinferserver gstnvinferserver_impl.cpp:360:validatePluginConfig: warning: Configuration file batch-size reset to: 40
ERROR: infer_trtis_server.cpp:994 Triton: failed to create repo server, triton_err_str:Not found, err_msg:unable to load shared library: /opt/tritonserver/backends/pytorch/libtorchtrt_runtime.so: undefined symbol: _ZN3c106detail14torchCheckFailEPKcS2_jRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
ERROR: infer_trtis_server.cpp:840 failed to initialize trtserver on repo dir: root: "/opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo"
strict_model_config: true

0:01:32.282805008 1 0x5653e6d474c0 ERROR nvinferserver gstnvinferserver.cpp:408:gst_nvinfer_server_logger: nvinferserver[UID 1]: Error in createNNBackend() <infer_trtis_context.cpp:256> [UID = 1]: model:peoplenet get triton server instance failed. repo:root: "/opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo"
strict_model_config: true

0:01:32.282815648 1 0x5653e6d474c0 ERROR nvinferserver gstnvinferserver.cpp:408:gst_nvinfer_server_logger: nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:79> [UID = 1]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRITON_ERROR
0:01:32.282828018 1 0x5653e6d474c0 WARN nvinferserver gstnvinferserver_impl.cpp:592:start: error: Failed to initialize InferTrtIsContext
0:01:32.282832148 1 0x5653e6d474c0 WARN nvinferserver gstnvinferserver_impl.cpp:592:start: error: Config file path: /opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo/peoplenet/config_triton_infer_primary_peoplenet.txt
0:01:32.282853438 1 0x5653e6d474c0 WARN nvinferserver gstnvinferserver.cpp:518:gst_nvinfer_server_start: error: gstnvinferserver_impl start failed
WARNING: infer_proto_utils.cpp:144 auto-update preprocess.network_format to IMAGE_FORMAT_RGB
[NvMultiObjectTracker] De-initialized
[06-Feb-24 19:48:33][INFO]
**PERF: {'stream0': 0.0, 'stream1': 0.0, 'stream2': 0.0, 'stream3': 0.0, 'stream4': 0.0, 'stream5': 0.0, 'stream6': 0.0, 'stream7': 0.0, 'stream8': 0.0, 'stream9': 0.0, 'stream10': 0.0, 'stream11': 0.0, 'stream12': 0.0, 'stream13': 0.0, 'stream14': 0.0, 'stream15': 0.0, 'stream16': 0.0, 'stream17': 0.0, 'stream18': 0.0, 'stream19': 0.0, 'stream20': 0.0, 'stream21': 0.0, 'stream22': 0.0, 'stream23': 0.0, 'stream24': 0.0, 'stream25': 0.0, 'stream26': 0.0, 'stream27': 0.0, 'stream28': 0.0, 'stream29': 0.0, 'stream30': 0.0, 'stream31': 0.0, 'stream32': 0.0, 'stream33': 0.0, 'stream34': 0.0, 'stream35': 0.0, 'stream36': 0.0, 'stream37': 0.0, 'stream38': 0.0, 'stream39': 0.0}

Warning: gst-library-error-quark: Configuration file batch-size reset to: 40 (5): gstnvinferserver_impl.cpp(360): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Warning: gst-library-error-quark: Configuration file batch-size reset to: 40 (5): gstnvinferserver_impl.cpp(360): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Warning: gst-library-error-quark: Configuration file batch-size reset to: 40 (5): gstnvinferserver_impl.cpp(360): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Warning: gst-library-error-quark: Configuration file batch-size reset to: 40 (5): gstnvinferserver_impl.cpp(360): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): gstnvinferserver_impl.cpp(592): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:
Config file path: /opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo/peoplenet/config_triton_infer_primary_peoplenet.txt
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): gstnvinferserver_impl.cpp(592): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:
Config file path: /opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo/peoplenet/config_triton_infer_primary_peoplenet.txt
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): gstnvinferserver_impl.cpp(592): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:
Config file path: /opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo/peoplenet/config_triton_infer_primary_peoplenet.txt
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): gstnvinferserver_impl.cpp(592): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:
Config file path: /opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo/peoplenet/config_triton_infer_primary_peoplenet.txt
[06-Feb-24 19:48:33][INFO] Exiting app

Do you use Docker? If so, which Docker image do you use and how do you start the container? Can you share your command line?

Judging from the error log, Triton was not started correctly.
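The key line is the "undefined symbol" error while loading the PyTorch backend: it usually means libtorchtrt_runtime.so was built against a different libtorch ABI than the one shipped in the container. You can confirm with a quick dlopen check (a sketch; the path is taken from your error log):

```python
# Try to dlopen the backend library that failed in the log. An OSError
# mentioning "undefined symbol" confirms an ABI mismatch between
# torch-tensorrt and the libtorch inside the container.
import ctypes
import os

LIB = "/opt/tritonserver/backends/pytorch/libtorchtrt_runtime.so"  # path from the error log

def check_lib(path: str) -> str:
    if not os.path.exists(path):
        return "not present"
    try:
        ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL)
        return "loads cleanly"
    except OSError as exc:
        return f"load failed: {exc}"

print(LIB, "->", check_lib(LIB))
```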

Yes, I am using Docker, pulling the nvcr.io/nvidia/deepstream:6.4-triton-multiarch image.

I fixed the above issue, but now I am facing another one.

When I run:
GST_DEBUG=3 python3 run.py --stream_paths /opt/nvidia/deepstream/deepstream-6.4/sources/inference/configs/streams/streams.json --pgie nvinferserver --config /opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo/peoplenet/config_triton_infer_primary_peoplenet.txt

Output

["/opt/nvidia/deepstream/deepstream-6.4/sources/inference/configs/streams/streams.json", "nvinferserver", "/opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo/peoplenet/config_triton_infer_primary_peoplenet.txt", false, false]
[08-Feb-24 19:48:20][INFO] Creating Pipeline
[08-Feb-24 19:48:20][INFO] Creating Stream mux
[08-Feb-24 19:48:20][INFO] At least one of the sources is live
[08-Feb-24 19:48:20][INFO] Creating Queue 1
[08-Feb-24 19:48:20][INFO] Creating source_bin 0

[08-Feb-24 19:48:20][INFO] Creating source bin
[08-Feb-24 19:48:20][INFO] source-bin-00
[08-Feb-24 19:48:20][INFO] Creating Decode-bin
[08-Feb-24 19:48:20][INFO] Creating PGIE
[08-Feb-24 19:48:20][INFO] Creating Queue 2
[08-Feb-24 19:48:20][INFO] Creating Converter 1
[08-Feb-24 19:48:20][INFO] Creating Queue 3
[08-Feb-24 19:48:20][INFO] Creating Caps filter 1
[08-Feb-24 19:48:20][INFO] Creating Queue 4
[08-Feb-24 19:48:20][INFO] Creating TILER
[08-Feb-24 19:48:20][INFO] Creating Queue 8
[08-Feb-24 19:48:20][INFO] Creating Converter 2
[08-Feb-24 19:48:20][INFO] Creating Queue 9
[08-Feb-24 19:48:20][INFO] Creating OSD
[08-Feb-24 19:48:20][INFO] Creating Queue 10
[08-Feb-24 19:48:20][INFO] Creating FAKE SINK
[08-Feb-24 19:48:20][INFO] 0 : rtsp://10.1.118.105:8554/stream0
[08-Feb-24 19:48:20][INFO] Starting pipeline

[08-Feb-24 19:48:20][INFO] Decodebin child added: src

[08-Feb-24 19:48:20][ERROR] Error: decodebin cannot be found
[08-Feb-24 19:48:20][ERROR] source not found in decodebin child
0:00:00.260336039 17605 0x561435f62ef0 WARN nvinferserver gstnvinferserver_impl.cpp:360:validatePluginConfig: warning: Configuration file batch-size reset to: 1
WARNING: infer_proto_utils.cpp:144 auto-update preprocess.network_format to IMAGE_FORMAT_RGB
INFO: infer_trtis_backend.cpp:218 TrtISBackend id:1 initialized model: peoplenet
0:00:01.014135289 17605 0x561435f62ef0 WARN nvinferserver gstnvinferserver.cpp:412:gst_nvinfer_server_logger: nvinferserver[UID 1]: Warning from allocateResource() <infer_cuda_context.cpp:554> [UID = 1]: Attention !! Tensor pool size larger than max host tensor pool size: 64 Continuing with user settings
Warning: gst-library-error-quark: Configuration file batch-size reset to: 1 (5): gstnvinferserver_impl.cpp(360): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
[tahaluf:17605:0:17654] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xb0)
==== backtrace (tid: 17654) ====
0 0x0000000000042520 __sigaction() ???:0
1 0x0000000000014984 _Unwind_GetDataRelBase() ???:0
2 0x00000000000ada49 __gxx_personality_v0() ???:0
3 0x000000000000afe9 __libunwind_Unwind_Resume() ???:0
4 0x000000000000786d ???() /lib/x86_64-linux-gnu/libproxy.so.1:0
5 0x0000000000010827 px_proxy_factory_get_proxies() ???:0
6 0x0000000000002827 ???() /usr/lib/x86_64-linux-gnu/gio/modules/libgiolibproxy.so:0
7 0x00000000000b2194 g_subprocess_communicate_utf8_finish() ???:0
8 0x00000000000876b4 g_thread_pool_unprocessed() ???:0
9 0x0000000000084a51 g_thread_unref() ???:0
10 0x0000000000094ac3 pthread_condattr_setpshared() ???:0
11 0x0000000000126a40 __xmknodat() ???:0
Segmentation fault (core dumped)

How can I fix this core dump issue?
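From the backtrace, the crash appears to originate inside the GIO libproxy module (libgiolibproxy.so). Would disabling that module be a reasonable workaround? Something like this (a sketch; the path is taken from the backtrace, so please verify it before renaming anything):

```shell
# Workaround sketch: disable the GIO libproxy module that shows up in the
# crash backtrace by renaming it. Path is from the crash log.
GIO_PROXY_MODULE="/usr/lib/x86_64-linux-gnu/gio/modules/libgiolibproxy.so"
if [ -f "$GIO_PROXY_MODULE" ]; then
    mv "$GIO_PROXY_MODULE" "$GIO_PROXY_MODULE.bak"
    status="module disabled"
else
    status="module not present"
fi
echo "libgiolibproxy: $status"
```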

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

I think you didn’t solve the problem.

You can try the following command line to start the Docker container:

docker run --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.4 nvcr.io/nvidia/deepstream:6.4-triton-multiarch

Then run user_deepstream_python_apps_install.sh to install pyds.
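To confirm afterwards that the pyds bindings are usable, a quick check like this should suffice (a sketch; run it inside the container):

```shell
# Quick check that the pyds Python bindings import correctly.
if python3 -c "import pyds" 2>/dev/null; then
    pyds_status="installed"
else
    pyds_status="missing"
fi
echo "pyds: $pyds_status"
```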

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.