I have an AI pipeline that uses two ML models, where the output of the first model acts as input to the second model.
Can DeepStream support this? Can I run both models in DeepStream on a Jetson device and have the system automatically handle both models?
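For context, DeepStream's `nvinfer` plugin supports cascaded inference through primary/secondary GIEs: the primary model runs on full frames, and a secondary model runs on the objects the primary model produces (as in the `deepstream-test2` sample). A minimal sketch of the two config files is below; the file names, model paths, and engine names are placeholders, not from any specific setup:

```ini
# pgie_config.txt - first model, runs on full frames
[property]
gie-unique-id=1
process-mode=1                      ; 1 = primary: infer on full frames
model-engine-file=model_a.engine    ; placeholder path

# sgie_config.txt - second model, runs on the first model's output objects
[property]
gie-unique-id=2
process-mode=2                      ; 2 = secondary: infer on upstream objects
operate-on-gie-id=1                 ; consume output of gie-unique-id=1
model-engine-file=model_b.engine    ; placeholder path
```

The pipeline then chains two `nvinfer` elements, e.g. `... ! nvstreammux ! nvinfer config-file-path=pgie_config.txt ! nvinfer config-file-path=sgie_config.txt ! ...`, and DeepStream routes the first model's output to the second automatically.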