Failed to parse ONNX model from file

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** Orin
**• DeepStream Version** 7.1
**• JetPack Version (valid for Jetson only)** 6.1
**• TensorRT Version** 10.3.0.30-1+cuda12.5
To test deepstream_parallel_inference_app, I used the following command:

./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

But it fails with the error "Failed to parse ONNX model from file".
What could be wrong?
The complete error output is:

atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample$ ./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml
src_ids:0;1;2
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
NVDSMETAMUX_CFG_PARSER: Group 'user-configs' ignored
Unknown or legacy key specified 'is-classifier' for group [property]
i:0, src_id_num:3
link_streamdemux_to_streammux, srid:0, mux:0
link_streamdemux_to_streammux, srid:1, mux:0
link_streamdemux_to_streammux, srid:2, mux:0
** INFO: <create_primary_gie_bin:147>: gpu-id: 0 in primary-gie group is ignored, only accept in nvinferserver's config
i:1, src_id_num:3
link_streamdemux_to_streammux, srid:1, mux:1
link_streamdemux_to_streammux, srid:2, mux:1
link_streamdemux_to_streammux, srid:3, mux:1
** INFO: <create_primary_gie_bin:147>: gpu-id: 0 in primary-gie group is ignored, only accept in nvinferserver's config
i:2, src_id_num:3
link_streamdemux_to_streammux, srid:1, mux:2
link_streamdemux_to_streammux, srid:2, mux:2
link_streamdemux_to_streammux, srid:3, mux:2
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/../../../../tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine open error
0:00:00.190658644 10483 0xaaaada764a90 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/../../../../tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine failed
0:00:00.190700820 10483 0xaaaada764a90 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/../../../../tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine failed, try rebuild
0:00:00.190714324 10483 0xaaaada764a90 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: ModelImporter.cpp:914: Failed to parse ONNX model from file: /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx!
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:01.973031684 10483 0xaaaada764a90 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:00:02.273838834 10483 0xaaaada764a90 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2213> [UID = 1]: build backend context failed
0:00:02.273898514 10483 0xaaaada764a90 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:00:02.274491058 10483 0xaaaada764a90 WARN                 nvinfer gstnvinfer.cpp:914:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:02.280780908 10483 0xaaaada764a90 WARN                 nvinfer gstnvinfer.cpp:914:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/config_yolov4_infer.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
**PERF: 0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	
ERROR from element primary_gie: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(914): gst_nvinfer_start (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstBin:parallel_infer_bin/GstBin:primary_gie_0_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/config_yolov4_infer.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Quitting
Returned, stopping playback
Deleting pipeline
App run successful

The model file is present in the folder, but parsing still fails.

This is because you did not install git-lfs. Please follow the steps in the README.

Please install git-lfs first, then git clone. If you have already cloned, run git lfs pull inside the repository:

sudo apt install git-lfs
git lfs install --skip-repo
git lfs pull
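
If you want to confirm that the LFS objects were actually fetched, you can inspect the model file itself (a minimal check, run from inside the cloned deepstream_parallel_inference_app directory, using the path from the error log above). A file that was never pulled is a small text pointer beginning with "version https://git-lfs.github.com/spec/v1" instead of a large binary:

# A real model is hundreds of MB; an un-pulled LFS pointer stub is ~130 bytes of text.
ls -lh tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx
head -c 120 tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx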

I have recorded all the steps I took.
In summary: if I use the git approach, running ./build_engine.sh fails with
./tao-converter: error while loading shared libraries: libnvinfer.so.8: cannot open shared object file: No such file or directory
Even though I am using DeepStream 7.1 and JetPack 6.1, why is it looking for libnvinfer.so.8?
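
One way to check which TensorRT runtime is actually installed (assuming the default JetPack library path on Jetson):

# List the installed TensorRT runtime libraries; JetPack 6.1 ships TensorRT 10,
# so there is no libnvinfer.so.8 for a binary linked against TensorRT 8 to load.
ls -l /usr/lib/aarch64-linux-gnu/libnvinfer.so*
dpkg -l | grep -i tensorrt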

If instead I manually download the ZIP of the whole deepstream_reference_apps repository and follow the instructions:

cd tritonserver/
./build_engine.sh
cd tritonclient/sample/
source build.sh

then everything builds successfully.
But the error occurs when running the application:
./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

Failed to parse ONNX model from file: /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps-master/deepstream_parallel_inference_app/tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx!

That is the summary.

The step-by-step log for the git approach follows; you can see the errors at the end.

atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps$ sudo apt install git-lfs
[sudo] password for atic: 
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
git-lfs is already the newest version (3.0.2-1ubuntu0.3).
0 upgraded, 0 newly installed, 0 to remove and 207 not upgraded.
atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps$ git lfs install --skip-repo
Git LFS initialized.
atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps$ git lfs install --skip-repo
Git LFS initialized.
atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app.git
Cloning into 'deepstream_parallel_inference_app'...
remote: Enumerating objects: 276, done.
remote: Counting objects: 100% (276/276), done.
remote: Compressing objects: 100% (145/145), done.
remote: Total 276 (delta 139), reused 231 (delta 106), pack-reused 0 (from 0)
Receiving objects: 100% (276/276), 484.18 KiB | 3.75 MiB/s, done.
Resolving deltas: 100% (139/139), done.
Filtering content: 100% (3/3), 324.84 MiB | 7.09 MiB/s, done.
atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps$ cd deepstream_parallel_inference_app/
atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_parallel_inference_app$ git lfs pull
atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_parallel_inference_app$ 
atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_parallel_inference_app$ cd tritonserver/
atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_parallel_inference_app/tritonserver$ ./build_engine.sh
ERROR
[01/14/2025-15:14:54] [I] Finished parsing network model. Parse time: 0.025963
[01/14/2025-15:14:54] [I] Set shape of input tensor input_1:0 for optimization profile 0 to: MIN=1x3x544x960 OPT=8x3x544x960 MAX=8x3x544x960
[01/14/2025-15:14:54] [I] FP32 and INT8 precisions have been specified - more performance might be enabled by additionally specifying --fp16 or --best
[01/14/2025-15:14:54] [I] Set calibration profile for input tensor input_1:0 to 8x3x544x960
[01/14/2025-15:14:54] [W] [TRT] DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
[01/14/2025-15:14:54] [I] [TRT] Calibration table does not match calibrator algorithm type.
[01/14/2025-15:14:54] [I] [TRT] Perform graph optimization on calibration graph.
[01/14/2025-15:14:54] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[01/14/2025-15:15:03] [I] [TRT] Detected 1 inputs and 2 output network tensors.
[01/14/2025-15:15:04] [I] [TRT] Total Host Persistent Memory: 209552
[01/14/2025-15:15:04] [I] [TRT] Total Device Persistent Memory: 1936896
[01/14/2025-15:15:04] [I] [TRT] Total Scratch Memory: 4608
[01/14/2025-15:15:04] [I] [TRT] [BlockAssignment] Started assigning block shifts. This will take 285 steps to complete.
[01/14/2025-15:15:05] [I] [TRT] [BlockAssignment] Algorithm ShiftNTopDown took 23.2011ms to assign 4 blocks to 285 nodes requiring 359301120 bytes.
[01/14/2025-15:15:05] [I] [TRT] Total Activation Memory: 359301120
[01/14/2025-15:15:05] [I] [TRT] Total Weights Memory: 15993392
[01/14/2025-15:15:05] [I] [TRT] Engine generation completed in 10.2474 seconds.
[01/14/2025-15:15:05] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +345, now: CPU 0, GPU 362 (MiB)
[01/14/2025-15:15:05] [I] [TRT] Starting Calibration.
[01/14/2025-15:15:05] [E] Error[3]: IExecutionContext::executeV2: Error Code 3: API Usage Error (Parameter check failed, condition: nullPtrAllowed. Tensor "input_1:0" is bound to nullptr, which is allowed only for an empty input tensor, shape tensor, or an output tensor associated with an IOuputAllocator.)
[01/14/2025-15:15:05] [E] Error[2]: [calibrator.cpp::calibrateEngine::1236] Error Code 2: Internal Error (Assertion context->executeV2(bindings.data()) failed. )
[01/14/2025-15:15:05] [E] Engine could not be created from network
[01/14/2025-15:15:05] [E] Building engine failed
[01/14/2025-15:15:05] [E] Failed to create engine from model or file.
[01/14/2025-15:15:05] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v100300] # trtexec --onnx=./models/peoplenet/1/resnet34_peoplenet_int8.onnx --int8 --calib=./models/peoplenet/1/resnet34_peoplenet_int8.txt --saveEngine=./models/peoplenet/1/resnet34_peoplenet_int8.onnx_b8_gpu0_int8.engine --minShapes=input_1:0:1x3x544x960 --optShapes=input_1:0:8x3x544x960 --maxShapes=input_1:0:8x3x544x960
Building Model Secondary_CarMake...
./tao-converter: error while loading shared libraries: libnvinfer.so.8: cannot open shared object file: No such file or directory
Building Model Secondary_VehicleTypes...
./tao-converter: error while loading shared libraries: libnvinfer.so.8: cannot open shared object file: No such file or directory
Finished generating engine files.
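
As an aside: since the trtexec failure above happens during INT8 calibration, one possible workaround (untested here, and only sensible if INT8 accuracy is not required) is to build the peoplenet engine in FP16 instead, reusing the shapes from the failing command and skipping calibration entirely. Note that the saved engine name would then have to match whatever the inference config expects:

# Sketch: FP16 build of the same model, no calibration step required.
trtexec --onnx=./models/peoplenet/1/resnet34_peoplenet_int8.onnx \
        --fp16 \
        --saveEngine=./models/peoplenet/1/resnet34_peoplenet_int8.onnx_b8_gpu0_fp16.engine \
        --minShapes=input_1:0:1x3x544x960 \
        --optShapes=input_1:0:8x3x544x960 \
        --maxShapes=input_1:0:8x3x544x960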

With the manual ZIP download, building everything works fine.
The error occurs when running the application:

./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

The full output is:

atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps-master/deepstream_parallel_inference_app/tritonclient/sample$ ./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

(gst-plugin-scanner:15293): GStreamer-WARNING **: 15:44:27.872: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
src_ids:0;1;2
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
NVDSMETAMUX_CFG_PARSER: Group 'user-configs' ignored
Unknown or legacy key specified 'is-classifier' for group [property]
i:0, src_id_num:3
link_streamdemux_to_streammux, srid:0, mux:0
link_streamdemux_to_streammux, srid:1, mux:0
link_streamdemux_to_streammux, srid:2, mux:0
** INFO: <create_primary_gie_bin:147>: gpu-id: 0 in primary-gie group is ignored, only accept in nvinferserver's config
i:1, src_id_num:3
link_streamdemux_to_streammux, srid:1, mux:1
link_streamdemux_to_streammux, srid:2, mux:1
link_streamdemux_to_streammux, srid:3, mux:1
** INFO: <create_primary_gie_bin:147>: gpu-id: 0 in primary-gie group is ignored, only accept in nvinferserver's config
i:2, src_id_num:3
link_streamdemux_to_streammux, srid:1, mux:2
link_streamdemux_to_streammux, srid:2, mux:2
link_streamdemux_to_streammux, srid:3, mux:2
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps-master/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/../../../../tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine open error
0:00:01.748135540 15292 0xaaaaf4d298a0 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps-master/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/../../../../tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine failed
0:00:01.748191796 15292 0xaaaaf4d298a0 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps-master/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/../../../../tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx_b4_gpu0_fp16.engine failed, try rebuild
0:00:01.748209684 15292 0xaaaaf4d298a0 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
ERROR: [TRT]: ModelImporter.cpp:914: Failed to parse ONNX model from file: /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps-master/deepstream_parallel_inference_app/tritonserver/models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx!
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:03.932436969 15292 0xaaaaf4d298a0 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:00:04.226428581 15292 0xaaaaf4d298a0 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2213> [UID = 1]: build backend context failed
0:00:04.226492326 15292 0xaaaaf4d298a0 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:00:04.233191465 15292 0xaaaaf4d298a0 WARN                 nvinfer gstnvinfer.cpp:914:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:04.233222985 15292 0xaaaaf4d298a0 WARN                 nvinfer gstnvinfer.cpp:914:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps-master/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/config_yolov4_infer.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
**PERF: 0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	
ERROR from element primary_gie: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(914): gst_nvinfer_start (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstBin:parallel_infer_bin/GstBin:primary_gie_0_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps-master/deepstream_parallel_inference_app/tritonclient/sample/configs/yolov4/config_yolov4_infer.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Quitting
Returned, stopping playback
Deleting pipeline
App run successful
atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps-master/deepstream_parallel_inference_app/tritonclient/sample$

JetPack 6.1 no longer supports tao-converter and .etlt models. In addition, deepstream_parallel_inference_app has been moved to the deepstream_reference_apps repository, as noted in the README:

Use deepstream_parallel_inference_app in deepstream_reference_apps repository

The standalone deepstream_parallel_inference_app repository is only for legacy versions.
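
In other words, clone the umbrella repository and use the app from its subdirectory (the directory names match the paths in the logs above):

# Clone the current home of the app, then fetch the LFS model files
git clone https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps.git
cd deepstream_reference_apps/deepstream_parallel_inference_app
git lfs pull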

Finally, it works now.
For anyone facing the same issue: use git to clone the whole deepstream_reference_apps repository rather than downloading the ZIP (GitHub ZIP archives contain only the small LFS pointer files, not the actual model binaries), and check that everything is 100% downloaded.
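
As a quick completeness check, git lfs ls-files marks each object: an asterisk (*) after the hash means the full content is present, a dash (-) means only the pointer stub was downloaded:

# Run from the repository root; every line should show "*" rather than "-".
git lfs ls-files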
