Creating parallel infer bin failed

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** Orin 64GB Developer Kit
**• DeepStream Version** 7.0
**• JetPack Version (valid for Jetson only)** 6.0+b87
**• TensorRT Version** 8.6.2.3-1+cuda12.2
I am testing deepstream_parallel_inference_app.
Installation and building of the app finished successfully.
Running the following command produces errors.

./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml

The errors are:

atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/deepstream_parallel_inference_app/tritonclient/sample$ ./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml
src_ids:0;1;2
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
src_ids:1;2;3
Unknown key enable-batch-process for tracker
Unknown key enable-past-frame for tracker
NVDSMETAMUX_CFG_PARSER: Group 'user-configs' ignored
** ERROR: <create_primary_gie_bin:119>: Failed to create 'primary_gie'
** ERROR: <create_primary_gie_bin:183>: create_primary_gie_bin failed
** ERROR: <create_parallel_infer_bin:1200>: create_parallel_infer_bin failed
creating parallel infer bin failed
Quitting
App run successful

All ONNX models in the yolov4, bodypose2d, and trafficcamnet folders are in place as required by the config files.
Why does creating the parallel infer bin fail?
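As a side note before getting to the failure itself: the "Unknown key enable-batch-process" and "Unknown key enable-past-frame" warnings in the log come from legacy tracker keys that newer DeepStream releases no longer accept, so the config parser warns and ignores them. They appear to be harmless here and are separate from the bin-creation failure. A sketch of a tracker group with those lines removed; the key names come from the log, the remaining keys and values are assumptions for illustration:

tracker:
  enable: 1
  tracker-width: 640
  tracker-height: 384
  ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
  ll-config-file: config_tracker_NvDCF_perf.yml
  # enable-batch-process and enable-past-frame were removed from the
  # tracker configuration; deleting them silences the warnings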

Judging only from the log you posted, the most probable cause is a problem with your system environment. The source code is shown below; the GStreamer bin element cannot be created on your system. Please set up your system by following the jetson-setup guide step by step.

gboolean
create_primary_gie_bin (NvDsGieConfig * config, NvDsPrimaryGieBin * bin)
{
  gboolean ret = FALSE;
  gst_nvinfer_raw_output_generated_callback out_callback =
      write_infer_output_to_file;

  /* If GStreamer cannot create even a plain bin here, the
   * installation itself is broken. */
  bin->bin = gst_bin_new ("primary_gie_bin");
  if (!bin->bin) {
    NVGSTDS_ERR_MSG_V ("Failed to create 'primary_gie_bin'");
    goto done;
  }

  /* ... remainder of the function elided ... */
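A quick way to test this theory is to check whether the NVIDIA GStreamer elements can be instantiated at all, for example with gst-inspect-1.0 nvinfer. The stand-alone sketch below (not part of the app; the element list is an assumption based on what the pipeline typically uses) probes the relevant element factories:

/* probe_nv_plugins.c — hypothetical diagnostic, not part of the app.
 * Build: gcc probe_nv_plugins.c -o probe_nv_plugins \
 *        $(pkg-config --cflags --libs gstreamer-1.0)
 */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  const char *names[] = { "nvstreammux", "nvinfer", "nvinferserver",
    "nvtracker", "nvvideoconvert"
  };
  guint i;

  gst_init (&argc, &argv);
  for (i = 0; i < G_N_ELEMENTS (names); i++) {
    /* gst_element_factory_make () returns NULL when the plugin
     * providing the element is missing or fails to load. */
    GstElement *e = gst_element_factory_make (names[i], NULL);
    g_print ("%-16s %s\n", names[i], e ? "OK" : "MISSING");
    if (e)
      gst_object_unref (e);
  }
  return 0;
}

If any of these print MISSING, the problem is in the DeepStream/GStreamer installation rather than in this sample.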

I tested whether my DeepStream installation has an issue by using deepstream-app. My deepstream-app can run PeopleNet successfully, so I can say that the DeepStream SDK installation and system environment are fine, because deepstream-app is able to create primary_gie.

Could the issue be coming from the setup of deepstream_parallel_inference_app? I had no errors during the setup process.

Could you update your DeepStream to the latest version, 7.1, and JetPack to 6.1? We have migrated deepstream_parallel_inference_app to the latest release.

I have tried DeepStream 7.1 and JetPack 6.1.
The issue described here is from running deepstream_parallel_inference_app on DeepStream 7.1 and JetPack 6.1.

deepstream_parallel_inference_app is looking for library version 8.0 (libnvinfer.so.8), but the system has all libraries at version 10.0.
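The mismatch can be verified directly: JetPack 6.1 ships TensorRT 10.x, so only version-10 libraries are installed. A quick check (a diagnostic sketch; exact package names vary by release):

dpkg -l | grep -E 'libnvinfer|tensorrt'

Anything that was linked against libnvinfer.so.8 (TensorRT 8) cannot load on such a system.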

The link you used for this demo is wrong for DeepStream 7.1; can you use the link I attached before?

OK, I'll try. I'll let you know.

You see, I have this issue using DeepStream 7.1 and JetPack 6.1.

This is my newly installed JetPack:

atic@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_parallel_inference_app/tritonserver$ sudo apt-cache show nvidia-jetpack
[sudo] password for atic: 
Package: nvidia-jetpack
Source: nvidia-jetpack (6.1)
Version: 6.1+b123

Then I used this link (the URL mentioned in your reply) to get deepstream_parallel_inference_app.

Then, when running ./build_engine.sh, I got these errors:

[01/13/2025-12:57:25] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v100300] # trtexec --onnx=./models/peoplenet/1/resnet34_peoplenet_int8.onnx --int8 --calib=./models/peoplenet/1/resnet34_peoplenet_int8.txt --saveEngine=./models/peoplenet/1/resnet34_peoplenet_int8.onnx_b8_gpu0_int8.engine --minShapes=input_1:0:1x3x544x960 --optShapes=input_1:0:8x3x544x960 --maxShapes=input_1:0:8x3x544x960
Building Model Secondary_CarMake...
./tao-converter: error while loading shared libraries: libnvinfer.so.8: cannot open shared object file: No such file or directory
Building Model Secondary_VehicleTypes...
./tao-converter: error while loading shared libraries: libnvinfer.so.8: cannot open shared object file: No such file or directory

All libraries inside /usr/lib/aarch64-linux-gnu are version 10. Please see the screenshot below.

How can this issue be solved?
I have another post here in which you said it was an installation issue with SDK Manager. There was actually no issue during installation.
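The tao-converter error above is this version mismatch in action: the prebuilt binary was linked against TensorRT 8, and the dynamic loader cannot find libnvinfer.so.8 on a TensorRT 10 system. ldd confirms it; the output below is what one would expect in this situation, not a captured log:

ldd ./tao-converter | grep libnvinfer
        libnvinfer.so.8 => not found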

I have tried that on my Orin with JetPack 6.1 and DeepStream 7.1, and it works well.
If you are using our latest DeepStream version and the latest build_engine.sh sample, there is no such command as the one you attached above.

trtexec --onnx=./models/peoplenet/1/resnet34_peoplenet_int8.onnx ...

The latest command looks like the following.

trtexec --onnx=./models/peoplenet/1/resnet34_peoplenet.onnx ...

Could you double-check which repository you are using, or have you modified our source code yourself?

This is the link I used with git to get the code.
We can both see whether the link is the latest or not; it is a GitHub link.
You can see that it still has _int8.

Can you give me the correct link? Please check whether this resnet34_peoplenet.onnx_b8_gpu0 is correct.

I think your git command is wrong and is giving me code that is not the latest.

git clone https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app.git
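If a clone appears stale, it can also be verified and refreshed in place instead of re-downloading. These are standard git commands; the branch name is an assumption:

cd deepstream_parallel_inference_app
git log -1 --oneline             # compare with the latest commit shown on GitHub
git fetch origin
git reset --hard origin/master   # reset the working tree to the latest upstream state

Note that git reset --hard discards local changes, so it should only be used on an unmodified clone.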

Thanks, it works now. I had been using the git command to download; this time I downloaded the code manually from GitHub and it worked.