GPU: Tesla T4
DeepStream version: 6.1.1
Driver Version: 515.65.01
I tested the deepstream_parallel_inference_app code inside the docker image deepstream:6.1.1-triton, but could not run it successfully.
Here are my steps:
I started a docker container with the command:
docker run -it --gpus all --shm-size 12g -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream:6.1.1-triton
Also, can anyone explain what NVIDIA did in the C++ code — what pipeline elements do they add, step by step? If we want to create a Python application, how would we go about doing the same things in Python?
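To make the question concrete, here is a minimal sketch of how the same branching topology could be expressed from Python. This is only an illustration, not the sample app's actual layout: it builds a gst-launch-style description string with a tee fanning out to one nvinfer branch per config file, and the config file names are hypothetical placeholders.

```python
def build_pipeline_desc(uri, pgie_configs):
    """Return a gst-launch-1.0 style pipeline string with one
    hypothetical inference branch per config file, joined by a tee."""
    head = (
        f"uridecodebin uri={uri} ! m.sink_0 "
        "nvstreammux name=m batch-size=1 width=1920 height=1080 ! tee name=t "
    )
    branches = []
    for i, cfg in enumerate(pgie_configs):
        # each branch: queue to decouple, nvinfer with its own config,
        # and a fakesink standing in for downstream processing
        branches.append(
            f"t. ! queue ! nvinfer name=pgie{i} config-file-path={cfg} ! fakesink "
        )
    return head + "".join(branches)

desc = build_pipeline_desc(
    "file:///opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_1080p_h264.mp4",
    ["config_yolov4.txt", "config_bodypose.txt"],  # hypothetical config names
)
print(desc)
```

In a real Python application you would feed such a description to `Gst.parse_launch()` via the GStreamer Python bindings, or build each element with `Gst.ElementFactory.make()` and link them yourself, attaching pyds probe callbacks where you need metadata.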
Also, I was confused about the folder location of deepstream_parallel_inference_app. I have now put it at: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps.
Also, I have checked the details of source4_1080p_dec_parallel_infer.yml, which is used in the command line, and I found some files are missing, as shown below:
Yes, I ran the commands as indicated in build_engine.sh; you can find the full command I used at the bottom of the snapshot.
The missing engine file should be created during the build_engine.sh step; then we can avoid the yolov4 engine error when running deepstream_parallel_inference_app:
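One way to guard against this is to check for the engine file before launching the app and rebuild it if it is absent. The sketch below is an assumption, not part of build_engine.sh: the engine and ONNX file names are hypothetical placeholders, and the trtexec path is the usual TensorRT install location inside the Triton container.

```shell
# Hypothetical helper: ensure a TensorRT engine exists, building it from
# the ONNX model with trtexec if it is missing. File names are assumptions;
# adjust them to match what build_engine.sh actually produces.
ensure_engine() {
    engine=$1
    onnx=$2
    if [ -f "$engine" ]; then
        echo "engine found: $engine"
        return 0
    fi
    echo "engine missing: $engine -- building with trtexec"
    # trtexec ships with TensorRT; --onnx, --saveEngine and --fp16 are
    # standard flags for building an FP16 engine from an ONNX model.
    /usr/src/tensorrt/bin/trtexec \
        --onnx="$onnx" \
        --saveEngine="$engine" \
        --fp16
}
```

Usage (with the hypothetical yolov4 file names): `ensure_engine yolov4_dynamic.onnx_b4_gpu0_fp16.engine yolov4_dynamic.onnx` — run it from the model directory before starting the app.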