Running separate models for each stream in parallel in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
I have a single DeepStream pipeline processing multiple streams. Say there are 8 streams, each of which has its own custom NN model. How can I run these in parallel?

Option 1: Create a DeepStream pipeline with 8 separate nvstreammux and 8 separate nvinfer modules. Connect each stream to its own nvstreammux.

Option 2: Use the Triton Inference Server (via the nvinferserver plugin) to serve the different models from one pipeline.

Option 3: Run 8 separate DeepStream pipelines.

Which of the above are workable? Which is the best option? Are there any examples which demonstrate this separate-NN-models use case?
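Option 1 can be sketched as a gst-launch-style pipeline description with one nvstreammux + nvinfer branch per stream. The snippet below only builds the description string; the stream URIs, config file paths, and mux dimensions are placeholders, and the real pipeline would use a rendering or analytics sink instead of fakesink.

```python
# Sketch of option 1: one DeepStream pipeline containing a separate
# nvstreammux + nvinfer branch per stream. All URIs and config paths
# below are placeholders for illustration.

def build_branch(idx, uri, config):
    """Return a gst-launch-style description for one stream/model branch."""
    return (
        f"uridecodebin uri={uri} ! m{idx}.sink_0 "
        f"nvstreammux name=m{idx} batch-size=1 width=1920 height=1080 ! "
        f"nvinfer config-file-path={config} ! fakesink"
    )

# Eight independent branches, each with its own model config.
branches = [
    build_branch(i, f"file:///videos/stream{i}.mp4", f"model{i}_config.txt")
    for i in range(8)
]
pipeline = " ".join(branches)
print(pipeline)
```

Because every branch has its own mux, each nvinfer instance only ever sees frames from its own stream, so no model ever runs on the wrong source.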

I work with option 3 and use containers.
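The container-based variant of option 3 can be sketched as launching one DeepStream container per stream/model pair. The image tag, app name, and config paths below are placeholders; the commands are printed rather than executed here.

```python
# Sketch of option 3: one containerized DeepStream pipeline per
# stream/model pair. The image tag and config paths are placeholders.

def container_cmd(idx):
    """Build the docker command for one stream's pipeline (not executed)."""
    return [
        "docker", "run", "--rm", "--gpus", "all", "-d",
        "deepstream-image:tag",                       # placeholder image
        "deepstream-app", "-c",
        f"/configs/stream{idx}_config.txt",           # placeholder config
    ]

cmds = [container_cmd(i) for i in range(8)]
for cmd in cmds:
    print(" ".join(cmd))
```

Each container is fully isolated, so a crash in one stream's model does not take down the other seven; the trade-off is some per-process CUDA context and memory overhead compared with a single pipeline.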

Option 4: the Multi-camera Jetson TX2 AI Media Server demo by RidgeRun (NVIDIA GTC 2020).

Option 3 works.

Thanks @PhongNT @rsc44 @Fiona.Chen .