How to Configure Different checkClassMatch Values for Multiple PGIEs in DeepStream Python?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.3
• NVIDIA GPU Driver Version (valid for GPU only): 530.30.02
• Issue Type (questions, new requirements, bugs): questions

I am working on a DeepStream Python application where I have two PGIEs in the pipeline:

  1. PGIE 1 needs object tracking with checkClassMatch: 0.
  2. PGIE 2 needs object tracking with checkClassMatch: 1.

I want to ensure that the tracker behaves differently for each PGIE, respecting their respective checkClassMatch configurations.

From my understanding, the checkClassMatch property is a global tracker setting, so it applies to every PGIE that shares the same tracker instance. How can I configure the pipeline to use a different checkClassMatch value for each PGIE?
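For reference, this is roughly how I generate the two tracker configs in my test setup (a minimal sketch: the section layout follows the NvDCF tracker YAML schema, but the other keys and file names are illustrative; in practice I copy the shipped config_tracker_NvDCF_perf.yml and edit it):

```python
# Minimal sketch: write two NvDCF tracker configs that differ only in
# checkClassMatch. All values besides checkClassMatch are illustrative.
TRACKER_CONFIG_TEMPLATE = """\
BaseConfig:
  minDetectorConfidence: 0.5   # illustrative value
DataAssociator:
  checkClassMatch: {check_class_match}   # 0: allow cross-class matches, 1: enforce class match
"""

for pgie_id, value in ((1, 0), (2, 1)):
    with open(f"tracker_pgie{pgie_id}.yml", "w") as f:
        f.write(TRACKER_CONFIG_TEMPLATE.format(check_class_match=value))
```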

Here are my questions:

  • Is it possible to use separate trackers for each PGIE in the same pipeline?
  • Can I dynamically adjust checkClassMatch based on the PGIE output?
  • Is there any other recommended approach to handle this scenario?

Any guidance, examples, or workarounds would be greatly appreciated. Thank you!

“checkClassMatch” is a global parameter.

In my case, I should implement a parallel pipeline where each branch pairs one PGIE with one tracker, and the metadata from these branches should then be merged, correct?

A parallel pipeline may help in your case: NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel.

Could you provide a diagram of your pipeline? I don’t quite follow how you set up the two PGIEs.

In any case, checkClassMatch is a parameter for the tracker. It’s a global parameter for all the streams processed by the same tracker library instance. If you want to use different tracker configs (e.g., one with checkClassMatch: 0 and the other with checkClassMatch: 1) for different streams, you can use multiple sub-batches, because each sub-batch instantiates its own tracker library. So you can use a different tracker config for each sub-batch, as sketched below.
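A minimal sketch of what that looks like in Python, assuming an nvtracker version that exposes the sub-batches property (check with gst-inspect-1.0 nvtracker; it may not be available in DS 6.3) and assuming the semicolon-separated per-sub-batch ll-config-file syntax; the config file names are the illustrative ones from above:

```python
# Sketch: one nvtracker whose batch is split into two sub-batches, each
# getting its own tracker library instance and low-level config file.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

tracker = Gst.ElementFactory.make("nvtracker", "tracker")
tracker.set_property(
    "ll-lib-file",
    "/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so",
)
# Source 0 goes to sub-batch 0, source 1 to sub-batch 1 (illustrative mapping).
tracker.set_property("sub-batches", "0;1")
# One low-level config per sub-batch, ';'-separated (assumed syntax).
tracker.set_property("ll-config-file", "tracker_pgie1.yml;tracker_pgie2.yml")
```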

In fact, I would recommend a multi-branch pipeline, where in the same pipeline you have two data paths: one for PGIE 1 + Tracker 1, and the other for PGIE 2 + Tracker 2. That way, you can use a different tracker config for each PGIE.
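The wiring would look roughly like the sketch below (element names are from the DeepStream SDK, but the pad linking is simplified and the PGIE, tracker, and metamux config paths are illustrative; see deepstream_parallel_inference_app for the complete version):

```python
# Sketch: fork the batched stream after nvstreammux (not shown) into two
# branches, each with its own PGIE + tracker, then merge metadata with
# nvdsmetamux. Error handling, sources, OSD, and sink are omitted.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("two-pgie-pipeline")

tee = Gst.ElementFactory.make("tee", "tee")

# Branch 1: PGIE 1 + Tracker 1 (tracker config with checkClassMatch: 0)
queue1 = Gst.ElementFactory.make("queue", "queue1")
pgie1 = Gst.ElementFactory.make("nvinfer", "pgie1")
tracker1 = Gst.ElementFactory.make("nvtracker", "tracker1")
pgie1.set_property("config-file-path", "pgie1_config.txt")    # illustrative
tracker1.set_property("ll-config-file", "tracker_pgie1.yml")  # illustrative

# Branch 2: PGIE 2 + Tracker 2 (tracker config with checkClassMatch: 1)
queue2 = Gst.ElementFactory.make("queue", "queue2")
pgie2 = Gst.ElementFactory.make("nvinfer", "pgie2")
tracker2 = Gst.ElementFactory.make("nvtracker", "tracker2")
pgie2.set_property("config-file-path", "pgie2_config.txt")    # illustrative
tracker2.set_property("ll-config-file", "tracker_pgie2.yml")  # illustrative

# nvdsmetamux merges the metadata from both branches back into one stream.
metamux = Gst.ElementFactory.make("nvdsmetamux", "metamux")
metamux.set_property("config-file", "metamux_config.txt")     # assumed config

for e in (tee, queue1, pgie1, tracker1, queue2, pgie2, tracker2, metamux):
    pipeline.add(e)

# tee -> queue -> pgie -> tracker -> metamux, once per branch.
tee.get_request_pad("src_%u").link(queue1.get_static_pad("sink"))
queue1.link(pgie1)
pgie1.link(tracker1)
tracker1.get_static_pad("src").link(metamux.get_request_pad("sink_0"))

tee.get_request_pad("src_%u").link(queue2.get_static_pad("sink"))
queue2.link(pgie2)
pgie2.link(tracker2)
tracker2.get_static_pad("src").link(metamux.get_request_pad("sink_1"))
```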

Thank you in advance for your assistance. I am unclear about what “multiple sub-batches” refers to in your context. My current pipeline is adapted from the “parallel inference pipeline,” which I believe aligns with your recommendation, and I am able to use different tracker configurations for the different PGIEs. However, memory usage has increased significantly compared to a typical sequential inference pipeline. Could you provide some guidance on optimizing processing time and memory usage when using a “parallel inference pipeline”?

Looking at the pipeline graph, the “parallel inference pipeline” only saves some video decoding effort compared to running several separate inference pipelines to meet your requirement. Since each branch still runs its own PGIE and tracker, the higher memory usage and longer processing time may be unavoidable side effects.

Have you measured the GPU load with “nvidia-smi dmon” while running the parallel pipeline?
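For example, something like this minimal sketch (the dmon flags follow nvidia-smi’s documented options) logs utilization and memory samples while the pipeline runs, so the parallel and sequential variants can be compared under the same workload:

```python
# Sketch: sample GPU utilization and memory with "nvidia-smi dmon" while the
# pipeline is running in another process.
import subprocess

# -s um: report utilization (u) and memory (m); -c 30: thirty 1-second samples
result = subprocess.run(
    ["nvidia-smi", "dmon", "-s", "um", "-c", "30"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```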