**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 5.1
**• TensorRT Version** 7.2.3
**• NVIDIA GPU Driver Version (valid for GPU only)** 460.80
**• Issue Type (questions, new requirements, bugs)** questions
Hello, I want to understand how inference works with multiple sources.
If I have one model, how can I run inference on multiple sources?
I know I can add sources in the config file; I have already used deepstream-app, for example with several [sourceN] groups like the sketch below.
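For reference, this is the kind of configuration I mean. It is only a minimal sketch of my setup; the URIs, resolutions, and batch size are placeholder values:

```
[source0]
enable=1
# type 2 = URI source
type=2
uri=file:///path/to/first_stream.mp4
gpu-id=0

[source1]
enable=1
# type 4 = RTSP source
type=4
uri=rtsp://camera-1.example.local/stream
gpu-id=0

[streammux]
gpu-id=0
# batch-size should match the number of enabled sources
batch-size=2
width=1920
height=1080
batched-push-timeout=40000
```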
On top of that, can I apply different post-processing rules per source?
That is, if one of the camera sources detects some objects, can I know which source detected which objects?
I want to send a command only for that source, while the other sources remain unaffected. Roughly, I have something like the probe sketch below in mind.
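To make the question concrete, here is a rough sketch in C of what I imagine: a GStreamer buffer probe that walks the batch metadata and reads `source_id` from `NvDsFrameMeta`. The metadata structures are the real DeepStream ones, but the probe attachment point, the `source_id == 3` rule, and `send_command_to_source()` are placeholders I made up for illustration:

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Hypothetical helper for my own control logic -- not a DeepStream API. */
static void send_command_to_source (guint source_id, NvDsObjectMeta *obj_meta);

/* Buffer probe (e.g. on the tiler/OSD sink pad) that walks the batch metadata
 * and tells me which source produced which detections. */
static GstPadProbeReturn
buffer_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    /* Index of the source ([sourceN]) this frame came from. */
    guint source_id = frame_meta->source_id;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      /* Per-source rule: react only to detections from this one source,
       * leaving all other sources untouched. */
      if (source_id == 3)
        send_command_to_source (source_id, obj_meta);
    }
  }
  return GST_PAD_PROBE_OK;
}
```

Is a probe like this the recommended way to tell the sources apart, or does deepstream-app already provide a better mechanism for per-source rules?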
This matters a lot because I will deploy this DeepStream solution across thousands of customer sources. Please tell me how to solve it; it is very important. Thank you.