DeepStream with Triton

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : NVIDIA GeForce RTX 3090
• DeepStream Version : 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version : 12.2
• NVIDIA GPU Driver Version (valid for GPU only) : 535.104.05
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I am using the DeepStream + Triton Docker container image nvcr.io/nvidia/deepstream:6.3-triton-multiarch.

When trying to run deepstream-test3:
python3 deepstream_test_3.py -i file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264 --pgie nvinferserver -c /opt/nvidia/deepstream/deepstream-6.3/samples/configs/tao_pretrained_models/deepstream_app_source1_peoplenet.txt

output

{'input': ['file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264'], 'configfile': '/opt/nvidia/deepstream/deepstream-6.3/samples/configs/tao_pretrained_models/deepstream_app_source1_peoplenet.txt', 'pgie': 'nvinferserver', 'no_display': False, 'file_loop': False, 'disable_probe': False, 'silent': False}
Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
source-bin-00
Creating Pgie

Creating tiler

Creating nvvidconv

Creating nvosd

Creating EGLSink

WARNING: Overriding infer-config batch-size 0 with number of sources 1

Adding elements to Pipeline

Linking elements in the Pipeline

Now playing…
0 : file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264
Starting pipeline

[libprotobuf ERROR /workspace/build/_deps/repo-third-party-build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format nvdsinferserver.config.PluginControl: 24:1: Extension "application" is not defined or is not an extension of "nvdsinferserver.config.PluginControl".
0:00:00.156345420 4994 0x34a90f0 WARN nvinferserver gstnvinferserver_impl.cpp:523:start: error: Configuration file parsing failed
0:00:00.156370800 4994 0x34a90f0 WARN nvinferserver gstnvinferserver_impl.cpp:523:start: error: Config file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/tao_pretrained_models/deepstream_app_source1_peoplenet.txt
0:00:00.156400131 4994 0x34a90f0 WARN nvinferserver gstnvinferserver.cpp:518:gst_nvinfer_server_start: error: gstnvinferserver_impl start failed
Error: gst-library-error-quark: Configuration file parsing failed (5): gstnvinferserver_impl.cpp(523): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:
Config file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/tao_pretrained_models/deepstream_app_source1_peoplenet.txt
Exiting app

I would appreciate your help in fixing this issue.

From the error, it is because configuration parsing failed. Did you modify the configuration file?
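For context, the [application]-style sections in deepstream_app_source1_peoplenet.txt are what the protobuf parser reports as an undefined "application" extension: that file is a deepstream-app config, while the nvinferserver plugin expects a protobuf text-format PluginControl config such as config_triton_infer_primary_peoplenet.txt. A minimal sketch of that format (field names follow the nvdsinferserver proto; the model name, repository path, label file, and thresholds below are placeholder assumptions, not the shipped sample's values):

```protobuf
# Sketch of a minimal nvinferserver (PluginControl) config in protobuf
# text format. Values marked as placeholders must match your setup.
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "peoplenet"          # placeholder model name
      version: -1                       # -1 = latest version in the repo
      model_repo {
        root: "../../triton_model_repo" # placeholder repository path
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    normalize { scale_factor: 0.0039215686 }  # 1/255
  }
  postprocess {
    labelfile_path: "labels.txt"        # placeholder label file
    detection {
      num_detected_classes: 3
      nms { confidence_threshold: 0.3 iou_threshold: 0.5 topk: 20 }
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
```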

No, I didn't edit the config file.

Should I use deepstream_app_source1_peoplenet.txt or config_triton_infer_primary_peoplenet.txt with deepstream-test3?

When I used config_triton_infer_primary_peoplenet.txt with the command below:

python3 deepstream_test_3.py -i file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264 --pgie nvinferserver -c config_triton_infer_primary_peoplenet.txt

Output

{'input': ['file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264'], 'configfile': 'config_triton_infer_primary_peoplenet.txt', 'pgie': 'nvinferserver', 'no_display': False, 'file_loop': False, 'disable_probe': False, 'silent': False}
Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
source-bin-00
Creating Pgie

Creating tiler

Creating nvvidconv

Creating nvosd

Creating EGLSink

WARNING: Overriding infer-config batch-size 0 with number of sources 1

Adding elements to Pipeline

Linking elements in the Pipeline

Now playing…
0 : file:///opt/nvidia/deepstream/deepstream-6.3/samples/streams/sample_720p.h264
Starting pipeline

WARNING: infer_proto_utils.cpp:144 auto-update preprocess.network_format to IMAGE_FORMAT_RGB
ERROR: infer_trtis_server.cpp:1057 Triton: failed to load model peoplenet, triton_err_str:Internal, err_msg:failed to load 'peoplenet', failed to poll from model repository
ERROR: infer_trtis_backend.cpp:54 failed to load model: peoplenet, nvinfer error:NVDSINFER_TRITON_ERROR
ERROR: infer_trtis_backend.cpp:193 failed to initialize backend while ensuring model:peoplenet ready, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:00.250944153 5925 0x1e750f0 ERROR nvinferserver gstnvinferserver.cpp:408:gst_nvinfer_server_logger: nvinferserver[UID 1]: Error in createNNBackend() <infer_trtis_context.cpp:289> [UID = 1]: failed to initialize triton backend for model:peoplenet, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:00.251884257 5925 0x1e750f0 ERROR nvinferserver gstnvinferserver.cpp:408:gst_nvinfer_server_logger: nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:79> [UID = 1]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:00.251921347 5925 0x1e750f0 WARN nvinferserver gstnvinferserver_impl.cpp:592:start: error: Failed to initialize InferTrtIsContext
0:00:00.251932617 5925 0x1e750f0 WARN nvinferserver gstnvinferserver_impl.cpp:592:start: error: Config file path: config_triton_infer_primary_peoplenet.txt
0:00:00.252034459 5925 0x1e750f0 WARN nvinferserver gstnvinferserver.cpp:518:gst_nvinfer_server_start: error: gstnvinferserver_impl start failed
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): gstnvinferserver_impl.cpp(592): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:
Config file path: config_triton_infer_primary_peoplenet.txt
Exiting app

From the error, the app can't load the model. Please prepare the model if you are using nvinferserver. Please refer to the "To setup peoplenet model and configs" section in the README.
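"failed to poll from model repository" generally means Triton did not find a loadable model directory: it expects <repo>/peoplenet/config.pbtxt plus at least one numeric version subdirectory (e.g. 1/) containing the engine file. A small sketch to check that layout (the repository path in the usage comment is an assumption based on this thread, and check_triton_model is a hypothetical helper, not part of DeepStream):

```python
import os

def check_triton_model(repo_root, model_name):
    """Return a list of problems with a Triton model directory; an empty list
    means the layout looks loadable (config.pbtxt + a non-empty version dir)."""
    problems = []
    model_dir = os.path.join(repo_root, model_name)
    if not os.path.isdir(model_dir):
        return ["missing model directory: " + model_dir]
    if not os.path.isfile(os.path.join(model_dir, "config.pbtxt")):
        problems.append("missing config.pbtxt")
    # Triton loads versions from numeric subdirectories such as 1/, 2/, ...
    versions = [d for d in os.listdir(model_dir)
                if d.isdigit() and os.path.isdir(os.path.join(model_dir, d))]
    if not versions:
        problems.append("no numeric version subdirectory (e.g. 1/) with a model file")
    elif all(not os.listdir(os.path.join(model_dir, v)) for v in versions):
        problems.append("version subdirectories are empty -- no engine/model file")
    return problems

# Example usage (path assumed from this thread):
# print(check_triton_model(
#     "/opt/nvidia/deepstream/deepstream/samples/triton_model_repo", "peoplenet"))
```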

I followed the README and still faced the same issue.

Thanks for your help

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Can you share the output of "ll /opt/nvidia/deepstream/deepstream/samples/triton_model_repo/peoplenet"?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.