Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GeForce RTX 3070 Ti
• DeepStream Version: 6.0.0
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: TensorRT-8.2.1.8
• NVIDIA GPU Driver Version (valid for GPU only): CUDA Version 11.5
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or sample application, and the function description.)
How can the streams be split back into individual streams after they have been merged by nvmultistreamtiler?
Thank you for your answer
I understand what you mean: inference is already done batch by batch, but the FPS is still not high enough, so I think running a single inference on the merged frame might perform better than batched inference.
The question remains: how can the video streams be split apart again after they have been merged by nvmultistreamtiler?
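For reference, and in case it helps frame the question: nvmultistreamtiler composites the batch into a single tiled frame intended for display, so its output is not normally split back into separate streams. The DeepStream element for splitting a batched stream back into per-source streams is nvstreamdemux, placed after nvinfer. Below is a minimal sketch under stated assumptions: two local H.264 MP4 files and an nvinfer config at placeholder paths, none of which come from this thread.

```python
#!/usr/bin/env python3
# Minimal sketch: batch two sources with nvstreammux, run nvinfer once per
# batched buffer, then split the batch back into per-stream outputs with
# nvstreamdemux. File locations and the nvinfer config path are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

PIPELINE = (
    "nvstreammux name=mux batch-size=2 width=1920 height=1080 "
    "batched-push-timeout=40000 ! "
    "nvinfer config-file-path=/path/to/pgie_config.txt ! "
    "nvstreamdemux name=demux "
    # source 0 -> mux.sink_0
    "filesrc location=/path/to/stream0.mp4 ! qtdemux ! h264parse ! "
    "nvv4l2decoder ! mux.sink_0 "
    # source 1 -> mux.sink_1
    "filesrc location=/path/to/stream1.mp4 ! qtdemux ! h264parse ! "
    "nvv4l2decoder ! mux.sink_1 "
    # per-stream branches after the demux
    "demux.src_0 ! nvvideoconvert ! nvdsosd ! nveglglessink "
    "demux.src_1 ! nvvideoconvert ! nvdsosd ! nveglglessink"
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
```

Note that nvinfer already processes the whole batch in a single call on the nvstreammux output, which is why per-stream splitting is usually done with nvstreamdemux rather than by trying to un-tile the composited frame from nvmultistreamtiler.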