Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
I have a single DeepStream pipeline processing multiple streams. Suppose there are 8 streams, each with its own customized NN model. How can I run these in parallel?
Option 1: Create a DeepStream pipeline with 8 separate nvstreammux and 8 separate nvinfer modules. Connect each stream to its own nvstreammux.
Option 2: Use Triton Inference Server (e.g. via the nvinferserver plugin) to serve the different models.
Option 3: Run 8 separate DeepStream pipelines.
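For reference, Option 1 could be sketched roughly as below. This is not from any official sample; it just assembles a gst-launch-1.0 description with 8 independent branches, each with its own nvstreammux and nvinfer. The stream URIs, resolutions, and per-model config file names (model_0.txt … model_7.txt) are hypothetical placeholders.

```python
# Hedged sketch: build a gst-launch-1.0 description for Option 1.
# Each branch is fully independent: its own decoder, its own nvstreammux
# (batch-size=1), and its own nvinfer with a per-stream model config.
# File paths and config names below are placeholders, not real assets.

def build_parallel_pipeline(num_streams=8):
    branches = []
    for i in range(num_streams):
        branches.append(
            f"uridecodebin uri=file:///videos/stream_{i}.mp4 ! "
            f"mux_{i}.sink_0 nvstreammux name=mux_{i} batch-size=1 "
            f"width=1920 height=1080 ! "
            f"nvinfer config-file-path=model_{i}.txt ! fakesink"
        )
    # Joining the branches in one launch line yields a single process
    # containing 8 parallel sub-pipelines.
    return "gst-launch-1.0 " + " ".join(branches)

if __name__ == "__main__":
    print(build_parallel_pipeline())
```

Whether this scales well in one process (versus Option 3's separate processes) would depend on GPU memory and how the 8 models share the device.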
Which of the above are workable? Which is the best option? Are there any examples that demonstrate this separate-NN-models use case?