Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1
• TensorRT Version: 8.2
• NVIDIA GPU Driver Version (valid for GPU only): 515
• Issue Type (questions, new requirements, bugs): question
I have read the Parallel Inference example in DeepStream, which uses the same nvstreammux for all models. In my case, however, I use two models that require different input_shape values, so I think two nvstreammux instances are needed so that each model receives the right input_shape. What do you think, or do you have any recommendations?
Thank you for your reply. Even though DeepStream provides Python bindings, it is still not clear to me how to build the pipeline nvstreammux → tee → multiple nvstreamdemux → nvstreammux → … → metamux with the Python bindings.
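For what it's worth, here is a minimal sketch of how one might wire that topology with PyGObject. This is an assumption-laden outline, not a verified pipeline: it assumes the legacy nvstreammux (which scales to its `width`/`height` properties), assumes the nvdsmetamux plugin from the parallel-inference sample app is built and installed, and uses placeholder config-file paths and resolutions. Pad names (`src_%u` on tee/nvstreamdemux, `sink_%u` on nvstreammux/nvdsmetamux) follow the DeepStream plugin manual; `request_pad_simple` needs GStreamer ≥ 1.20 (use `get_request_pad` on older versions).

```python
#!/usr/bin/env python3
# Sketch: nvstreammux -> tee -> per-branch (nvstreamdemux -> nvstreammux -> nvinfer) -> nvdsmetamux
# Source setup, bus handling, and the part of the pipeline after metamux are omitted.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("parallel-inference")

def make(factory, name):
    """Create a DeepStream/GStreamer element and add it to the pipeline."""
    elem = Gst.ElementFactory.make(factory, name)
    if elem is None:
        raise RuntimeError(f"failed to create {factory!r} (is DeepStream installed?)")
    pipeline.add(elem)
    return elem

# Common front end: one mux for all sources, then a tee that feeds each branch.
mux0 = make("nvstreammux", "mux0")
mux0.set_property("batch-size", 1)
mux0.set_property("width", 1920)        # placeholder source resolution
mux0.set_property("height", 1080)
tee = make("tee", "tee0")
mux0.link(tee)

def add_branch(idx, width, height, config_path):
    """One inference branch: demux the batch, re-mux at this model's input
    resolution, then run nvinfer with this model's config file."""
    demux = make("nvstreamdemux", f"demux{idx}")
    mux = make("nvstreammux", f"branchmux{idx}")
    mux.set_property("batch-size", 1)
    mux.set_property("width", width)    # per-model input_shape
    mux.set_property("height", height)
    infer = make("nvinfer", f"infer{idx}")
    infer.set_property("config-file-path", config_path)

    # tee -> demux: request a src pad on the tee.
    tee.request_pad_simple("src_%u").link(demux.get_static_pad("sink"))
    # demux stream 0 -> branch mux: both ends are request pads.
    demux.request_pad_simple("src_0").link(mux.request_pad_simple("sink_0"))
    mux.link(infer)
    return infer

# Two branches with different model input shapes (values/paths are placeholders).
b0 = add_branch(0, 1280, 720, "model_a_config.txt")
b1 = add_branch(1, 640, 640, "model_b_config.txt")

# Merge the branches' metadata; nvdsmetamux comes from the parallel-inference app.
metamux = make("nvdsmetamux", "metamux0")
for i, infer in enumerate((b0, b1)):
    infer.get_static_pad("src").link(metamux.request_pad_simple(f"sink_{i}"))
```

The key point the sketch tries to show is that nvstreamdemux and nvstreammux expose request pads, so in Python you must create the pad links explicitly (before setting the pipeline to PLAYING) rather than relying on `Element.link`.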