Please provide complete information as applicable to your setup.
**• Hardware Platform** GPU (Ubuntu 20.04)
• DeepStream Version 6.3
• TensorRT Version 8.6.1.6
• Issue Type (questions, new requirements, bugs) questions
Hi, I’m writing some C++ code to test a DeepStream parallel inference application. The main purpose is to test the nvdsmetamux plugin.
• Aim: I want to implement multi-channel video inference through parallel pipelines.
• Questions: I am unable to display video via nvmultistreamtiler, and I am unable to get any data or print any logs in the probe function on nvdsmetamux’s src pad (just as described in 273940; this means the application hangs at this point). All of these problems occur without any program errors. (A minimal sketch of how the probe is attached follows the config below.)
[property]
enable=1
# sink pad name whose data will be passed to the src pad.
active-pad=sink_0
# default pts-tolerance is 60 ms (the value is in microseconds).
pts-tolerance=60000
[user-configs]
[group-0]
src-ids-model-1=0;1;2;3
src-ids-model-2=0;1;2;3
# src-ids-model-<model unique ID>=<source ids>
# mux all sources if this is not set.
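For reference, here is a minimal sketch of how such a probe can be attached, assuming the standard GStreamer/DeepStream APIs. The element name, the config path `config_metamux.txt`, and the setup function are placeholders, not the exact code from my application:

```cpp
#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Probe on nvdsmetamux's src pad: print the source id of every frame
 * in the batch so we can see whether any buffers flow through at all. */
static GstPadProbeReturn
metamux_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta) {
    g_print ("metamux src pad: buffer without batch meta\n");
    return GST_PAD_PROBE_OK;
  }
  for (NvDsMetaList *l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    g_print ("metamux src pad: frame from source %u\n", frame_meta->source_id);
  }
  return GST_PAD_PROBE_OK;
}

/* During pipeline setup: create the element, point it at the config
 * above, and hook the probe onto its src pad. Adding the element to
 * the bin and linking it to the branches is omitted here. */
static void
setup_metamux_probe (void)
{
  GstElement *metamux = gst_element_factory_make ("nvdsmetamux", "metamux");
  g_object_set (G_OBJECT (metamux), "config-file", "config_metamux.txt", NULL);

  GstPad *src_pad = gst_element_get_static_pad (metamux, "src");
  gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_BUFFER,
      metamux_src_pad_probe, NULL, NULL);
  gst_object_unref (src_pad);
}
```

If a probe like this never fires while the sink pads do receive buffers, nvdsmetamux may still be waiting to match batches across its sink pads, for example when the PTS values of the branches differ by more than pts-tolerance.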
The following pipeline runs completely normally: pipe15.zip (288.8 KB)
Here, a single pipeline runs completely normally, and the src-ids-model-x property of nvdsmetamux also behaves correctly.
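For illustration, here is a simplified sketch of the kind of parallel topology involved: one tee feeding two nvinfer branches, merged by nvdsmetamux and tiled by nvmultistreamtiler. This is not the actual pipeline from pipe15.zip; the tee-based layout, the model config paths (pgie_model1.txt, pgie_model2.txt), and the metamux config path are placeholder assumptions:

```cpp
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* One test source batched by nvstreammux, split by a tee into two
   * nvinfer branches; nvdsmetamux merges the branch metadata and
   * nvmultistreamtiler composites the output. Config paths are
   * placeholders and must point to real model/metamux configs. */
  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch (
      "videotestsrc is-live=true num-buffers=100 ! nvvideoconvert ! "
      "video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0 "
      "nvstreammux name=mux batch-size=1 width=1920 height=1080 ! tee name=t "
      "t. ! queue ! nvinfer config-file-path=pgie_model1.txt ! m.sink_0 "
      "t. ! queue ! nvinfer config-file-path=pgie_model2.txt ! m.sink_1 "
      "nvdsmetamux name=m config-file=config_metamux.txt ! "
      "nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! "
      "nvvideoconvert ! nvdsosd ! fakesink",
      &err);
  if (!pipeline) {
    g_printerr ("parse error: %s\n", err ? err->message : "unknown");
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg)
    gst_message_unref (msg);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (bus);
  gst_object_unref (pipeline);
  return 0;
}
```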
Sorry for the late reply. I tried to test the official demo (deepstream_parallel_inference_app), but encountered an error after running it, without making any changes.
It could be an issue with your Triton environment. Are you using our Triton-based Docker image, such as nvcr.io/nvidia/deepstream-l4t:7.0-triton-multiarch? Could you try updating your DeepStream to our latest version? There may also be compatibility issues.