Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.2 Triton
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only) Driver Version: 525.85.05 CUDA Version: 12.0
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
- Use old nvstreammux, set batch size > 1
- Use multisrcbin, set max batch size > 1
I am using a pipeline with an LSTM model and dynamic batching. Since the model is stateful, I need to prevent batching of multiple frames from the same stream, while still allowing frames from different streams to be batched together.
I currently cannot find a way to configure this.
Any insights?
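To make the requirement concrete, here is a minimal Python sketch of the batching rule I'm after (this is illustrative only, not a DeepStream API; `make_batches`, the stream IDs, and the frame labels are all hypothetical): each batch may hold at most one frame per stream, up to a maximum batch size, so stream state is never updated twice within one inference call.

```python
from collections import OrderedDict

def make_batches(frames, max_batch_size):
    """Group (stream_id, frame) pairs into batches such that each batch
    holds at most one frame per stream and at most max_batch_size frames
    in total. Frames keep their arrival order within each stream.
    Hypothetical helper for illustration only."""
    batches = []
    current = OrderedDict()  # stream_id -> frame for the batch being built
    for stream_id, frame in frames:
        # Close the current batch if this stream is already represented
        # in it, or if the batch is full.
        if stream_id in current or len(current) == max_batch_size:
            batches.append(list(current.items()))
            current = OrderedDict()
        current[stream_id] = frame
    if current:
        batches.append(list(current.items()))
    return batches

# Three streams (0, 1, 2) interleaved; max batch size of 2.
frames = [(0, "f0a"), (1, "f1a"), (0, "f0b"), (2, "f2a"), (1, "f1b")]
print(make_batches(frames, max_batch_size=2))
# → [[(0, 'f0a'), (1, 'f1a')], [(0, 'f0b'), (2, 'f2a')], [(1, 'f1b')]]
```

This is the behavior I'd like nvstreammux (or a mux configuration) to enforce for me; I just haven't found the property or config key that does it.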
Guy