Parallel Inference example in DeepStream

We are excited to announce support for running multiple models in parallel in DeepStream. We have published a sample application on GitHub: [NVIDIA-AI-IOT/deepstream_parallel_inference_app](https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app), a project demonstrating how to use the new gst-nvdsmetamux plugin to run multiple models in parallel. The repository provides instructions for setting up a pipeline with multiple PGIE+SGIE branches using deepstream-app style configuration files.
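To illustrate what a PGIE+SGIE branch looks like in deepstream-app style configuration, here is a minimal sketch. The group names (`[primary-gie]`, `[secondary-gie0]`) and keys follow the standard deepstream-app convention; the file names are placeholders, and the actual configuration files for the parallel inference app (including the metamux setup) are provided in the GitHub repository.

```ini
# Sketch of one inference branch in deepstream-app style config.
# File names below are illustrative; see the repo for the real configs.

[primary-gie]
enable=1
# Detector config for this branch (placeholder path)
config-file=config_infer_primary_branch0.txt
# Unique ID so downstream elements can identify this model's metadata
gie-unique-id=1

[secondary-gie0]
enable=1
# Classifier config for this branch (placeholder path)
config-file=config_infer_secondary_branch0.txt
gie-unique-id=2
# Run only on objects produced by the PGIE with gie-unique-id=1
operate-on-gie-id=1
```

In the parallel inference app, several such branches run side by side, and the gst-nvdsmetamux plugin merges the metadata from each branch back into a single stream.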
If you run into issues with this example, please post a topic on the forum. We will be happy to assist.