Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson Orin NX
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.3
When I create the following pipeline and run it, with probe functions added at the src pad of metamux, why are the output inference results repeated? The models configured for my two PGIEs are completely different.
I have tried modifying my pipeline, as shown in the attachment. Can this avoid the problem of both models handling the same object meta during parallel inference?
1. Sorry, the pipeline file in the question I initially raised seems to be incorrect: there, a tee followed the streammux, and the two models then processed in parallel. In the modified pipeline, the tee is instead followed by two streammuxes, and each model then processes its own branch separately.
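If it helps to make the modified layout concrete, a minimal sketch might look like the following (element and property names follow the DeepStream plugin reference; the file path, resolutions, and the two PGIE config file names are placeholders for your own setup):

```shell
# Sketch: decode once, tee, then a separate nvstreammux + nvinfer per branch.
# Each branch batches independently, so each PGIE gets its own NvDsBatchMeta.
gst-launch-1.0 \
  nvstreammux name=m1 batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=pgie1_config.txt ! fakesink \
  nvstreammux name=m2 batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=pgie2_config.txt ! fakesink \
  filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! tee name=t \
  t. ! queue ! m1.sink_0 \
  t. ! queue ! m2.sink_0
```

Note the `queue` after each tee branch: tee does not decouple branches by itself, so each branch needs its own streaming thread.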
**2.** My understanding is that the NvDsBatchMeta is created after the streammux plugin, and the two branches then process it in parallel without affecting each other. Is that correct?
**3.** I developed a multi-model inference pipeline based on this example. I don't have multiple source inputs; I just want to achieve fast parallel inference on images. Running the models serially is not an option, because efficiency has to be taken into account. Do you have any suggestions on how I should handle this?
This pipeline works, but you shouldn't use metamux here, because it merges metadata from different source IDs. For your pipeline, I think a serial nvinfer → nvinfer chain can also achieve the goal.
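The serial chain suggested above could be sketched roughly like this (config file names and the input path are placeholders; `unique-id` is the standard nvinfer property that keeps each model's metadata distinguishable downstream):

```shell
# Sketch: one batch, two inference engines run back to back on the same buffer.
# Each nvinfer attaches its results to the same NvDsBatchMeta, tagged by unique-id.
gst-launch-1.0 \
  filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=pgie1_config.txt unique-id=1 ! \
  nvinfer config-file-path=pgie2_config.txt unique-id=2 ! \
  nvdsosd ! fakesink
```

A probe after the second nvinfer then sees both models' results in one batch meta, with no duplication.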
It is a business requirement: we hope these two models can process in parallel and infer as quickly as possible. But may I ask why this pipeline cannot use metamux? Isn't the function of metamux to synchronize the analysis results?
About parallel inference and tee, please refer to this topic.
Tee does not copy the data; it just shares the same batched buffer between the branches. If you want to use parallel inference and metamux at the same time, please refer to the pipeline below.
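The referenced pipeline is attached in the original thread; structurally, it follows the deepstream_parallel_inference_app sample, which can be sketched roughly like this (the `nvdsmetamux` element and its config file come from that sample and must be built from it; all file names here are placeholders, so please verify the details against the sample before use):

```shell
# Sketch: batch once, tee to two PGIE branches, then merge the branches'
# metadata back into one stream with the sample's nvdsmetamux element.
gst-launch-1.0 \
  nvdsmetamux name=meta config-file=metamux_config.txt ! nvdsosd ! fakesink \
  filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1280 height=720 ! tee name=t \
  t. ! queue ! nvinfer config-file-path=pgie1_config.txt unique-id=1 ! meta.sink_0 \
  t. ! queue ! nvinfer config-file-path=pgie2_config.txt unique-id=2 ! meta.sink_1
```

Because the tee shares one buffer, the distinct `unique-id` values are what let the metamux and any downstream probe tell the two models' results apart.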