As the compatibility table shows, DS 7.1 requires TensorRT 10.3. Please follow the guide to install DeepStream.
From the error “Failed to create ‘primary_gie’”, nvinfer/nvinferserver can’t be created. Could you share the results of “gst-inspect-1.0 nvinfer” and “gst-inspect-1.0 nvinferserver”?
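A quick way to run both checks together (a sketch; `plugin_ok` is a hypothetical helper, not part of DeepStream):

```shell
# Hypothetical helper: succeeds only when gst-inspect-1.0 can introspect
# the element, so a missing plugin is easy to spot in a script.
plugin_ok() {
  gst-inspect-1.0 "$1" >/dev/null 2>&1
}

for p in nvinfer nvinferserver; do
  if plugin_ok "$p"; then
    echo "$p: OK"
  else
    echo "$p: MISSING (check the DeepStream install)"
  fi
done
```

If a plugin shows as missing, clearing the GStreamer registry cache (`rm -rf ~/.cache/gstreamer-1.0`) and re-running `gst-inspect-1.0` sometimes surfaces the underlying loader error.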
From the error, the application failed to parse the model. Please check the model tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx. If the file size is too small, please use “git lfs pull” to download the whole file. Here is the related doc.
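One way to tell whether the .onnx file is still a Git LFS pointer stub rather than the real weights (a sketch; `is_lfs_pointer` is a hypothetical helper, and the model path is the one from your log):

```shell
# A Git LFS pointer stub is a tiny text file that starts with
# "version https://git-lfs.github.com/spec/v1"; the real model is binary.
is_lfs_pointer() {
  head -c 60 "$1" 2>/dev/null | grep -q 'git-lfs.github.com/spec'
}

MODEL=tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx
if is_lfs_pointer "$MODEL"; then
  echo "pointer stub only; fetching real file..."
  git lfs pull
fi
```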
You are right; that solution fixed the yolov4 model issue. However, there is now an error when proceeding to the bodypose2d model:
./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml
src_ids:0;1;2
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
NVDSMETAMUX_CFG_PARSER: Group ‘user-configs’ ignored
Unknown or legacy key specified ‘is-classifier’ for group [property]
i:0, src_id_num:3
link_streamdemux_to_streammux, srid:0, mux:0
link_streamdemux_to_streammux, srid:1, mux:0
link_streamdemux_to_streammux, srid:2, mux:0
** INFO: <create_primary_gie_bin:147>: gpu-id: 0 in primary-gie group is ignored, only accept in nvinferserver’s config
i:1, src_id_num:3
link_streamdemux_to_streammux, srid:1, mux:1
link_streamdemux_to_streammux, srid:2, mux:1
link_streamdemux_to_streammux, srid:3, mux:1
** INFO: <create_primary_gie_bin:147>: gpu-id: 0 in primary-gie group is ignored, only accept in nvinferserver’s config
i:2, src_id_num:3
link_streamdemux_to_streammux, srid:1, mux:2
link_streamdemux_to_streammux, srid:2, mux:2
link_streamdemux_to_streammux, srid:3, mux:2
WARNING: [TRT]: BatchedNMSPlugin is deprecated since TensorRT 9.0. Use INetworkDefinition::addNMS() to add an INMSLayer OR use EfficientNMS plugin.
0:00:01.409637613 269030 0x55949b6b9090 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:327 [FullDims Engine Info]: layers num: 5
0 INPUT kFLOAT input 3x416x416 min: 1x3x416x416 opt: 4x3x416x416 Max: 4x3x416x416
1 OUTPUT kINT32 num_detections 1 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT nmsed_boxes 1000x4 min: 0 opt: 0 Max: 0
3 OUTPUT kFLOAT nmsed_scores 1000 min: 0 opt: 0 Max: 0
4 OUTPUT kFLOAT nmsed_classes 1000 min: 0 opt: 0 Max: 0
0:00:01.409777161 269030 0x55949b6b9090 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine
0:00:01.422151790 269030 0x55949b6b9090 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/config_yolov4_infer.txt sucessfully
ERROR: Triton: failed to set model repo path: /usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models, triton_err_str:tt1, err_msg:tt1
ERROR: failed to initialize trtserver on repo dir: root: “/usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models”
strict_model_config: true
0:00:01.424756852 269030 0x55949b6b9090 ERROR nvinferserver gstnvinferserver.cpp:405:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 2]: Error in createNNBackend() <infer_trtis_context.cpp:258> [UID = 2]: model:bodypose2d get triton server instance failed. repo:root: “/usr/local/mesh_services/lscpu.us/lscpu.us.agent/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models”
strict_model_config: true
bodypose2d’s backend is onnxruntime. Please check whether the onnxruntime backend is installed according to the Triton server website above. You can use the DeepStream Triton docker image, which already installs all dependencies.
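A minimal check (a sketch; `backend_installed` is a hypothetical helper, and the backends path assumes the standard Triton layout, e.g. the DeepStream Triton container, so adjust `TRITON_BACKENDS` for your install):

```shell
# Hypothetical helper: report whether a named Triton backend directory exists.
backend_installed() {
  [ -d "$1/$2" ]
}

# Default path assumed from the standard Triton container layout.
TRITON_BACKENDS=${TRITON_BACKENDS:-/opt/tritonserver/backends}
if backend_installed "$TRITON_BACKENDS" onnxruntime; then
  echo "onnxruntime backend found"
else
  echo "onnxruntime backend missing: use the DeepStream Triton docker image"
fi
```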
Please refer to my first comment. What is the driver version? Please make sure all versions are correct according to this compatibility table. If not, please follow the guide to install DeepStream.
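To collect the versions the compatibility table cares about in one go (a sketch; `ver_ge` is a hypothetical helper for comparing dotted version strings, and the package query commands may need adjusting for your distribution):

```shell
# Hypothetical helper: succeeds when version $1 >= version $2 (dotted form),
# using GNU sort's version ordering.
ver_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Report driver and TensorRT/DeepStream package versions.
nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null || echo "nvidia-smi not found"
dpkg -l 2>/dev/null | grep -Ei 'tensorrt|deepstream' | head
```

For example, `ver_ge 10.3.2 10.3` succeeds, so a TensorRT 10.3.x install meets a 10.3 requirement.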
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.