With deepstream_parallel_inference_app in Python, multiple branches are not working

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU): 3070
• DeepStream Version: 6.4
• TensorRT Version: 8.6
• NVIDIA GPU Driver Version (valid for GPU only): 545

This is my previous topic, where I explained that parallel branches have not worked since 6.2.

Please help me out with this. I'm attaching my pipeline image; if any element needs to change, please feel free to advise!

The C/C++ sample works. Please make sure your Python app is aligned with the sample.

Can you share a picture of the DeepStream parallel pipeline? Or can you check my pipeline and tell me whether any element needs to change?

I am trying to understand why it's not working in 6.2 and above but works with 6.1.1 … same code, no change! (This is in Python.)

If you can share anything that would help me find something new, please do!

You can get the graph by the method described here: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
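For reference, this is roughly how the dot-graph dump is done from a Python GStreamer app. A minimal sketch only: the output directory and dump file name here are my own placeholder choices, and the `Gst.debug_bin_to_dot_file` call is shown commented out because it needs a running pipeline.

```python
import os

# GST_DEBUG_DUMP_DOT_DIR must be set BEFORE Gst.init() is called,
# otherwise GStreamer silently skips the dump.
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp/pipeline-dots"
os.makedirs("/tmp/pipeline-dots", exist_ok=True)

# Later, once the pipeline has reached PLAYING, dump the graph:
#   Gst.debug_bin_to_dot_file(pipeline,
#                             Gst.DebugGraphDetails.ALL,
#                             "parallel-pipeline")
# Then render the .dot file to an image with Graphviz:
#   dot -Tpng /tmp/pipeline-dots/parallel-pipeline.dot -o play.png
```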

There are some changes from DeepStream 6.1.1 to DeepStream 6.2. Please follow the sample code: NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel. (github.com)
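For comparison, a sketch of the branch topology the sample uses: one muxed stream is split with a `tee` into per-model inference branches, and the metadata mux element merges the results back into a single stream. This is a sketch under stated assumptions, not the sample itself: the config file names are placeholders, and the exact element names and pad naming should be checked against your DeepStream version.

```python
# Sketch of the parallel-inference topology (placeholder config names):
# nvstreammux -> tee -> N nvinfer branches -> metadata mux -> display.
branch_configs = ["branch0_pgie.txt", "branch1_pgie.txt"]

branches = []
for i, cfg in enumerate(branch_configs):
    # Each branch decouples from the tee with a queue, runs its own
    # nvinfer instance, and feeds one sink pad of the metadata mux.
    branches.append(
        f"t. ! queue ! nvinfer config-file-path={cfg} ! meta.sink_{i}"
    )

pipeline_desc = " ".join(
    ["nvstreammux name=mux batch-size=1 ! tee name=t",
     "nvdsmetamux name=meta ! nvmultistreamtiler ! nvdsosd ! nveglglessink"]
    + branches
)
print(pipeline_desc)
```

Building the description as a string like this makes it easy to diff your Python pipeline against the graph dumped from the C/C++ sample.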

I know how to print the graph, but I was getting some errors while running it, so I don't want to invest time solving that in C++, since I am using Python and replicating the same kind of pipeline. But if you could share the graph of the DeepStream parallel pipeline, since you said you ran it and it worked for you, it would make it easier for me to compare both pipelines and debug mine.
@Fiona.Chen

Here is the graph I captured. I will delete it three days later: play.png - Google Drive

Thanks, I downloaded it. Let me compare both, do my best, and get back to you ASAP.
Thanks!