I have an AI application running on the GPU with input from multiple IP cameras, and I want to configure DeepStream to support different models for different cameras.
For example, cameras 1/2 would use YOLOv4 to detect cars, while cameras 2/3 would use tiny-YOLO to detect persons.
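One common way to get different models per camera (assuming you can afford two independent branches) is to build two sub-pipelines in the same process, each with its own nvstreammux and nvinfer instance, so each model only sees the cameras assigned to it. A minimal gst-launch sketch of that layout is below; the RTSP URLs, resolutions, and the nvinfer config file names (`yolov4_cars_config.txt`, `tiny_yolo_persons_config.txt`) are placeholders you would replace with your own:

```shell
# Sketch only: two independent inference branches in one pipeline.
# Branch A: cameras 1/2 -> YOLOv4 (cars); Branch B: cameras 3/4 -> tiny-YOLO (persons).
gst-launch-1.0 \
  nvstreammux name=mux_a batch-size=2 width=1920 height=1080 batched-push-timeout=40000 ! \
    nvinfer config-file-path=yolov4_cars_config.txt ! \
    nvvideoconvert ! nvdsosd ! fakesink \
  nvstreammux name=mux_b batch-size=2 width=1920 height=1080 batched-push-timeout=40000 ! \
    nvinfer config-file-path=tiny_yolo_persons_config.txt ! \
    nvvideoconvert ! nvdsosd ! fakesink \
  rtspsrc location=rtsp://cam1/stream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux_a.sink_0 \
  rtspsrc location=rtsp://cam2/stream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux_a.sink_1 \
  rtspsrc location=rtsp://cam3/stream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux_b.sink_0 \
  rtspsrc location=rtsp://cam4/stream ! rtph264depay ! h264parse ! nvv4l2decoder ! mux_b.sink_1
```

If one camera must go through both models, you can `tee` its decoded stream into both muxers instead of wiring it to only one. This is only a sketch under those assumptions, not a tested configuration for your setup.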
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application it is for, and the function description.)