Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
Hi,
I have one primary model and eight secondary models running in DeepStream, and the FPS is very low. I am currently working on optimization to increase the speed.

The primary model has two outputs (A and B). Five of the secondary models work on output A from the primary, and the rest work on output B.

Currently they run sequentially. How can I guarantee that the secondary models run in parallel? Would tee or queue help with this?
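For reference, in plain GStreamer a tee with a queue on each branch does give every branch its own streaming thread, so the branches run concurrently; whether this composes cleanly with nvinfer batching and metadata flow in DeepStream 5.1 is a separate question. A minimal sketch of the mechanism (the fakesinks stand in for the secondary-inference branches, which are placeholders, not your actual elements):

```
# tee fans the stream out; each queue starts a new streaming thread,
# so branch_a and branch_b run in parallel.
gst-launch-1.0 videotestsrc num-buffers=100 ! tee name=t \
  t. ! queue ! fakesink name=branch_a \
  t. ! queue ! fakesink name=branch_b
```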
Have you tried setting the operate-on-class-ids parameter?

I think that should do what you're trying to achieve; the only difference is that it is not parallel like your diagram.

For example, if you set operate-on-class-ids=2 for secondary_model5, secondary_model6, and secondary_model7, and operate-on-class-ids=1 for secondary_model1 through secondary_model4, then I think you should get the desired behaviour.
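A sketch of how that could look in the nvinfer config files, assuming the primary detector labels output-A objects as class 1 and output-B objects as class 2 (the class IDs and file groupings here are placeholders for your actual setup):

```
# In config files for secondary_model1 .. secondary_model4
# (models that should only see output A, assumed class 1):
[property]
operate-on-gie-id=1        # run only on objects from the primary GIE
operate-on-class-ids=1     # restrict to class-1 (output A) objects

# In config files for secondary_model5 .. secondary_model7
# (models that should only see output B, assumed class 2):
[property]
operate-on-gie-id=1
operate-on-class-ids=2
```

With this, each secondary element still sits in the pipeline sequentially, but it skips objects whose class ID does not match, so each model only spends inference time on the crops it actually needs.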