Error when running deepstream_parallel_inference_app

Base information:
GPU: Tesla T4
DeepStream version: 6.1.1
Driver version: 515.65.01
I tested the deepstream_parallel_inference_app code in the deepstream:6.1.1-triton Docker image, but it does not run successfully.
Here are my steps:

  1. I start a Docker container with the command:
    docker run -it --gpus all --shm-size 12g -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream:6.1.1-triton
  2. run the following commands step by step (a sanity check for this step is sketched just after the list):
    apt update
    ./user_additional_install.sh
    apt install git-lfs
    git lfs install --skip-repo
    cd sources/apps/sample_apps
    git clone https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app.git
    cd deepstream_parallel_inference_app/
    cd tritonserver/
    ./build_engine.sh
    cd ../tritonclient/sample/
    source build.sh
  3. test the deepstream_parallel_inference_app code:
    ./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml
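(Sanity check for step 2: list the engines that build_engine.sh generated before running the app. The tritonserver/models/<model>/1/ layout here is an assumption based on the trtexec paths the script uses:)

    # Run from the deepstream_parallel_inference_app root; each model should have a .engine file
    ls -lh tritonserver/models/*/1/*.engine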

Running step 3, I got the following error:

[error screenshot]
I hope someone can help me clarify this problem. Thanks in advance!

Did you change anything in the code? It looks like your config file path is not right.

/opt/nvidia/.........../configs/yolov4

There should be only one yolov4 folder. You can confirm that with a check like the one below.
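For example (assuming the app is cloned under sources/apps/sample_apps, as in the steps above):

    # There should be exactly one yolov4 directory in the whole tree
    find /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_parallel_inference_app -type d -name yolov4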

Please, can anyone explain what NVIDIA did in the C++ app and which pipeline elements they add step by step? If we want to create a Python application, how would we go ahead and do the same things in Python?

Thanks for your reply.
Yes, I found my error: a duplicated yolov4 folder. I have fixed that, but now I get another error about the yolov4 engine file. Details below:

[screenshot of the yolov4 engine error]
Also, I was confused about where to put the deepstream_parallel_inference_app folder. I now have it at /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps.
I have also checked the details of the source4_1080p_dec_parallel_infer.yml used on the command line and found that some of the files it references are missing, as shown below:

[screenshot of the missing files]
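One way to see which files the config expects (a sketch: the config path is the one from the run command, relative to tritonclient/sample, and the extension list is only a guess at what the YAML references):

    cd /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_parallel_inference_app/tritonclient/sample
    # Print every model/engine/label/media path mentioned in the YAML, then check each one exists
    grep -oE '[^ "]+\.(onnx|engine|txt|yml|mp4)' configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml | sort -u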
I also rechecked my steps and found an error while running build_engine.sh:

[screenshot of the build_engine.sh error]
Please open a new topic about your issue. Thanks.

Could you try running the command below from build_engine.sh directly? You should first make sure that the ONNX file exists under that path.

trtexec --fp16 --onnx=./models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx --saveEngine=./models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx_b32_gpu0.engine  --minShapes=input:1x3x416x416 --optShapes=input:16x3x416x416 --maxShapes=input:32x3x416x416 --shapes=input:16x3x416x416 --workspace=10000
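For example, a quick way to confirm the ONNX file is in place (same path as in the command above):

    ls -lh ./models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx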

Yes, I ran the command as indicated in build_engine.sh; you can find the full command I used at the bottom of the snapshot.
The missing engine file should be created at the build_engine.sh step, so that the yolov4 engine error is avoided when running deepstream-parallel-infer:

[screenshot showing the full trtexec command and the resulting error]
Did you confirm that the following file is in the relevant path, ./models/yolov4/1/?

yolov4_-1_3_416_416_dynamic.onnx.nms.onnx

Yes. I confirm.

Could you show the size of the model? Or you can replace it with the following model:
https://drive.google.com/drive/folders/18TXX3c7_Of16zVeWrfCkyT4ooVMz44oW
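For example (the models are fetched with git-lfs, so if the download failed, the file on disk may be only a tiny text pointer rather than the real multi-megabyte ONNX model):

    # A failed git-lfs fetch leaves a small text stub in place of the binary model
    du -h ./models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx
    file ./models/yolov4/1/yolov4_-1_3_416_416_dynamic.onnx.nms.onnx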

Thanks!!
Yes, the problem was that the model download failed. This morning I ran the code successfully.
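(For anyone hitting the same issue: since the models are pulled with git-lfs in the setup steps above, a failed download can usually be retried from the repository root with something like:)

    cd /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_parallel_inference_app
    git lfs pull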
