I’m creating an alert system based on DeepStream, using the metadata generated by Gst-nvmsgconv and sending it to Apache Kafka, based on this example:
I’m currently running Kafka in a container on the Jetson Nano. My question is: if I want to scale this system with more Nano modules (inference servers), do you recommend hosting the Kafka broker on one of them and having the rest connect to it, or using a cloud provider for that service?
As additional information: I want to use an external server or service to build the alarm pipeline, consuming from Kafka and forwarding to a messaging or email service. Do you have any recommendations for this case?
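For context, the Kafka-to-email side of the setup could look roughly like the sketch below. This is only an illustration under assumptions not stated in the thread: the kafka-python client library, a hypothetical topic name and broker address, placeholder SMTP host and email addresses, and an alert rule that fires whenever the Gst-nvmsgconv JSON payload contains an `object` entry.

```python
import json
import smtplib
from email.message import EmailMessage

# Hypothetical values -- replace with your actual broker, topic, and addresses.
BROKER = "192.168.1.10:9092"   # e.g. the Nano hosting the Kafka container
TOPIC = "deepstream-alerts"

def should_alert(payload: dict) -> bool:
    """Alert whenever the Gst-nvmsgconv payload describes a detected object.
    (Simplistic rule, for illustration only.)"""
    return "object" in payload

def build_alert(payload: dict) -> EmailMessage:
    """Format an email from the event metadata."""
    msg = EmailMessage()
    msg["Subject"] = f"DeepStream alert from sensor {payload.get('sensorId', 'unknown')}"
    msg["From"] = "alerts@example.com"      # placeholder addresses
    msg["To"] = "operator@example.com"
    msg.set_content(json.dumps(payload, indent=2))
    return msg

def main():
    # Imported here so the helpers above are usable without the client installed.
    from kafka import KafkaConsumer  # pip install kafka-python
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKER,
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    with smtplib.SMTP("smtp.example.com") as smtp:  # placeholder SMTP host
        for record in consumer:
            if should_alert(record.value):
                smtp.send_message(build_alert(record.value))

if __name__ == "__main__":
    main()
```

Whichever placement you choose for the broker, only `BROKER` changes here; the consumer can run on any machine that can reach it, which is why an external server or managed service works for the alerting side.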
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)