DeepStream parallel app inference

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU)
• DeepStream Docker Version 6.2.0

Repo: https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app (a project demonstrating how to use nvmetamux to run multiple models in parallel)

I am running this example:

./apps/deepstream-parallel-infer/deepstream-parallel-infer -c configs/apps/vehicle0_lpr_analytic/source4_1080p_dec_parallel_infer.yml

and got this error.
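For anyone reproducing this, my setup steps were roughly as follows, inside the DeepStream 6.2 Triton container (a sketch based on the repo's README; details such as the tritonserver/build_engine.sh script name and the git-lfs requirement are taken from that README and may differ for your version):

# Inside the DeepStream 6.2 Triton docker container
apt update && apt install -y git-lfs   # the repo stores model files with Git LFS
git lfs install
git clone https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app.git
cd deepstream_parallel_inference_app

# Generate the model engines / Triton model repository (script name per the README)
cd tritonserver && ./build_engine.sh && cd ..

# Build the sample app, then run it from the repo root
cd apps/deepstream-parallel-infer && make && cd ../..
./apps/deepstream-parallel-infer/deepstream-parallel-infer \
    -c configs/apps/vehicle0_lpr_analytic/source4_1080p_dec_parallel_infer.yml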

Do you have any ideas on how to fix this?
Thanks

Not sure whether this is related to your setup or installation. Do the standard deepstream-app samples run well on your host?
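For example, something like this inside the same container should exercise the stock pipeline (a sketch; the exact sample config file name can vary between DeepStream releases, so adjust the path to whatever ships in your container):

cd /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app
# Run the reference 4-stream detection + tracker + secondary-classifier sample
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt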

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

It is important to provide detailed information for your issue.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the content of the configuration files, the command line used, and any other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)
