NVIDIA DeepStream 7.1 deepstream_parallel_inference_app execution issue

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU)
• DeepStream Version 7.1
• TensorRT Version 10.1
• NVIDIA GPU Driver Version (valid for GPU only) v12.8
• Issue Type (bug)
• How to reproduce the issue? Follow the instructions for deepstream_parallel_inference_app in the NVIDIA-AI-IOT/deepstream_reference_apps repo (master branch)

Here are the logs:

Command: ./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

Terminal:
src_ids:0;1;2
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
NVDSMETAMUX_CFG_PARSER: Group ‘user-configs’ ignored
Unknown or legacy key specified ‘is-classifier’ for group [property]
i:0, src_id_num:3
link_streamdemux_to_streammux, srid:0, mux:0
link_streamdemux_to_streammux, srid:1, mux:0
link_streamdemux_to_streammux, srid:2, mux:0
** ERROR: <create_primary_gie_bin:122>: Failed to create ‘primary_gie’
** ERROR: <create_primary_gie_bin:186>: create_primary_gie_bin failed
** ERROR: <create_parallel_infer_bin:1277>: create_parallel_infer_bin failed
creating parallel infer bin failed
Quitting
App run successful

How to resolve this?

  1. As the compatibility table shows, DS 7.1 requires TensorRT 10.3. Please follow the guide to install DeepStream.
  2. From the error “Failed to create ‘primary_gie’”, the nvinfer/nvinferserver element can’t be created. Could you share the results of “gst-inspect-1.0 nvinfer” and “gst-inspect-1.0 nvinferserver”?
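If it helps, a small shell loop can report both plugins at once. This is just a sketch: the plugin names come from the two commands above, and the paths in the hint are the default DeepStream install location and GStreamer's standard plugin-path variable.

```shell
# Check whether GStreamer can see the DeepStream inference plugins.
# A MISSING result usually means DeepStream (or its plugin path) is not set up.
for plugin in nvinfer nvinferserver; do
  if gst-inspect-1.0 "$plugin" >/dev/null 2>&1; then
    echo "$plugin: OK"
  else
    echo "$plugin: MISSING (check /opt/nvidia/deepstream and GST_PLUGIN_PATH)"
  fi
done
```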

gst-inspect-1.0 reported that there is no such element “nvinferserver”, and the Triton Inference Server was not yet installed, so I followed the instructions in the triton-inference-server/core repository (the core library and APIs implementing the Triton Inference Server).

I ran the sample application again after completing the installation; the logs are shown below:

Command: ./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

Terminal:
src_ids:0;1;2
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
NVDSMETAMUX_CFG_PARSER: Group ‘user-configs’ ignored
Unknown or legacy key specified ‘is-classifier’ for group [property]
i:0, src_id_num:3
link_streamdemux_to_streammux, srid:0, mux:0
link_streamdemux_to_streammux, srid:1, mux:0
link_streamdemux_to_streammux, srid:2, mux:0
** INFO: <create_primary_gie_bin:147>: gpu-id: 0 in primary-gie group is ignored, only accept in nvinferserver’s config
i:1, src_id_num:3
link_streamdemux_to_streammux, srid:1, mux:1
link_streamdemux_to_streammux, srid:2, mux:1
link_streamdemux_to_streammux, srid:3, mux:1
** INFO: <create_primary_gie_bin:147>: gpu-id: 0 in primary-gie group is ignored, only accept in nvinferserver’s config
i:2, src_id_num:3
link_streamdemux_to_streammux, srid:1, mux:2
link_streamdemux_to_streammux, srid:2, mux:2
link_streamdemux_to_streammux, srid:3, mux:2
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1152 Deserialize engine failed because file path: /usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/../../../../tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine open error
0:00:00.741894139 267986 0x55d19bf17a90 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/../../../../tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine failed
0:00:00.741946248 267986 0x55d19bf17a90 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/../../../../tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine failed, try rebuild
0:00:00.741967528 267986 0x55d19bf17a90 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: (deserializeOnnxModel): MODEL_DESERIALIZE_FAILED: Assertion failed: model->ParseFromCodedStream(&codedInput): Failed to parse the ONNX model.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:54 Failed to parse onnx file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:673 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:518 failed to build network.
0:00:05.285939524 267986 0x55d19bf17a90 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:00:05.352821093 267986 0x55d19bf17a90 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2213> [UID = 1]: build backend context failed
0:00:05.353019371 267986 0x55d19bf17a90 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:00:05.356797061 267986 0x55d19bf17a90 WARN nvinfer gstnvinfer.cpp:914:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:05.357247586 267986 0x55d19bf17a90 WARN nvinfer gstnvinfer.cpp:914:gst_nvinfer_start:<primary_gie> error: Config file path: /usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/config_yolov4_infer.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
**PERF: 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
ERROR from element primary_gie: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(914): gst_nvinfer_start (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstBin:parallel_infer_bin/GstBin:primary_gie_0_bin/GstNvInfer:primary_gie:
Config file path: /usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/config_yolov4_infer.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Quitting
Returned, stopping playback
Deleting pipeline
App run successful

What else is missing?

From the error, the application failed to parse the model. Please check the model tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx. If the size is too small, please use “git lfs pull” to download the whole file. Here is the related doc.
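For reference, the size check above can be sketched as follows: an un-fetched Git LFS file is a tiny text pointer whose first line starts with “version https://git-lfs…”, so inspecting the first bytes distinguishes the two cases. The path is the one from the logs; adjust it to your checkout.

```shell
# Distinguish a real ONNX blob from an un-fetched Git LFS pointer file.
MODEL="tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx"
if [ ! -f "$MODEL" ]; then
  echo "model file not found"
elif head -c 12 "$MODEL" | grep -q '^version http'; then
  # LFS pointer files begin with: version https://git-lfs.github.com/spec/v1
  echo "LFS pointer only - run: git lfs install && git lfs pull"
else
  echo "model blob present ($(wc -c < "$MODEL") bytes)"
fi
```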

You are right. That solution fixed the yolov4 model issue. However, there is a further error when proceeding to the bodypose2d model:

./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml
src_ids:0;1;2
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
NVDSMETAMUX_CFG_PARSER: Group ‘user-configs’ ignored
Unknown or legacy key specified ‘is-classifier’ for group [property]
i:0, src_id_num:3
link_streamdemux_to_streammux, srid:0, mux:0
link_streamdemux_to_streammux, srid:1, mux:0
link_streamdemux_to_streammux, srid:2, mux:0
** INFO: <create_primary_gie_bin:147>: gpu-id: 0 in primary-gie group is ignored, only accept in nvinferserver’s config
i:1, src_id_num:3
link_streamdemux_to_streammux, srid:1, mux:1
link_streamdemux_to_streammux, srid:2, mux:1
link_streamdemux_to_streammux, srid:3, mux:1
** INFO: <create_primary_gie_bin:147>: gpu-id: 0 in primary-gie group is ignored, only accept in nvinferserver’s config
i:2, src_id_num:3
link_streamdemux_to_streammux, srid:1, mux:2
link_streamdemux_to_streammux, srid:2, mux:2
link_streamdemux_to_streammux, srid:3, mux:2
WARNING: [TRT]: BatchedNMSPlugin is deprecated since TensorRT 9.0. Use INetworkDefinition::addNMS() to add an INMSLayer OR use EfficientNMS plugin.
0:00:01.409637613 269030 0x55949b6b9090 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:327 [FullDims Engine Info]: layers num: 5
0 INPUT kFLOAT input 3x416x416 min: 1x3x416x416 opt: 4x3x416x416 Max: 4x3x416x416
1 OUTPUT kINT32 num_detections 1 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT nmsed_boxes 1000x4 min: 0 opt: 0 Max: 0
3 OUTPUT kFLOAT nmsed_scores 1000 min: 0 opt: 0 Max: 0
4 OUTPUT kFLOAT nmsed_classes 1000 min: 0 opt: 0 Max: 0

0:00:01.409777161 269030 0x55949b6b9090 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine
0:00:01.422151790 269030 0x55949b6b9090 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/config_yolov4_infer.txt sucessfully
ERROR: Triton: failed to set model repo path: /usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models, triton_err_str:tt1, err_msg:tt1
ERROR: failed to initialize trtserver on repo dir: root: “/usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models”
strict_model_config: true

0:00:01.424756852 269030 0x55949b6b9090 ERROR nvinferserver gstnvinferserver.cpp:405:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 2]: Error in createNNBackend() <infer_trtis_context.cpp:258> [UID = 2]: model:bodypose2d get triton server instance failed. repo:root: “/usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models”
strict_model_config: true

0:00:01.424782702 269030 0x55949b6b9090 ERROR nvinferserver gstnvinferserver.cpp:405:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 2]: Error in initialize() <infer_base_context.cpp:80> [UID = 2]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:01.424804632 269030 0x55949b6b9090 WARN nvinferserver gstnvinferserver_impl.cpp:597:start:<primary_gie> error: Failed to initialize InferTrtIsContext
0:00:01.424818261 269030 0x55949b6b9090 WARN nvinferserver gstnvinferserver_impl.cpp:597:start:<primary_gie> error: Config file path: /usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/bodypose2d/config_body2_inferserver.txt
0:00:01.424880621 269030 0x55949b6b9090 WARN nvinferserver gstnvinferserver.cpp:515:gst_nvinfer_server_start:<primary_gie> error: gstnvinferserver_impl start failed
Running…
**PERF: 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)
ERROR from element primary_gie: Failed to initialize InferTrtIsContext
Error details: gstnvinferserver_impl.cpp(597): start (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstBin:parallel_infer_bin/GstBin:primary_gie_1_bin/GstNvInferServer:primary_gie:
Config file path: /usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/bodypose2d/config_body2_inferserver.txt
Quitting
Returned, stopping playback
Deleting pipeline
App run successful

bodypose2d’s backend is onnxruntime. Please check whether the onnxruntime backend is installed according to the Triton server website above. You can also use the DeepStream Triton docker image, which already has all dependencies installed.
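A minimal sketch of that check, assuming the stock Triton layout where each backend lives in its own directory under /opt/tritonserver/backends (the TRITON_BACKEND_DIR override is just for illustration):

```shell
# Verify the onnxruntime backend directory exists in the Triton install.
BACKEND_DIR="${TRITON_BACKEND_DIR:-/opt/tritonserver/backends}"
if [ -d "$BACKEND_DIR/onnxruntime" ]; then
  echo "onnxruntime backend installed"
else
  echo "onnxruntime backend missing in $BACKEND_DIR"
fi
```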

Please refer to my first comment. What is the driver version? Please make sure all versions are correct according to this compatibility table. If not, please follow the guide to install DeepStream.
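A quick sketch for collecting the versions the compatibility table compares, in one place. Paths and package names assume a default Ubuntu x86 install; each probe degrades to a message (or an empty field) if the tool or file is absent.

```shell
# Print driver / CUDA / TensorRT / DeepStream versions for comparison
# against the DeepStream compatibility table.
echo "Driver:     $(nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null || echo 'nvidia-smi not found')"
echo "CUDA:       $(nvcc --version 2>/dev/null | awk '/release/ {print $NF}')"
echo "TensorRT:   $(dpkg -s tensorrt 2>/dev/null | awk '/^Version/ {print $2}')"
echo "DeepStream: $(cat /opt/nvidia/deepstream/deepstream/version 2>/dev/null || echo 'not installed')"
```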

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.