I’m in the middle of a project that requires capturing images from a camera, processing them, and making decisions based on the results.
Here is the architecture of the system:
In this scenario, we rely on a server plus edge computing for decision making. In my opinion, the edge computer is redundant: we should do the processing on the server instead. Also, since we can have multiple cameras, we would end up with multiple edge computers.
In my understanding, edge computing matters in situations where the decision resulting from the processing is acted on locally, to avoid the latency between the server and the point of decision.
Since we rely on a server anyway, the advantages of an edge computer don’t apply to our solution.
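One counterpoint worth weighing before dropping the edge computers: uplink bandwidth, not just latency. Below is a back-of-envelope sketch (all numbers are hypothetical assumptions, not from my actual setup) comparing the bandwidth needed to stream raw frames from every camera to the server against sending only compact decisions from edge devices that process frames locally.

```python
def raw_stream_mbps(num_cameras, width, height, fps, bytes_per_pixel=3):
    """Uplink bandwidth (Mbit/s) to ship uncompressed frames to the server."""
    bits_per_second = num_cameras * width * height * bytes_per_pixel * 8 * fps
    return bits_per_second / 1e6


def decisions_mbps(num_cameras, fps, bytes_per_decision=64):
    """Uplink bandwidth (Mbit/s) if each edge device sends one small
    decision message per frame instead of the frame itself."""
    return num_cameras * bytes_per_decision * 8 * fps / 1e6


# Hypothetical example: 4 cameras at 720p, 30 fps.
server_only = raw_stream_mbps(num_cameras=4, width=1280, height=720, fps=30)
edge_based = decisions_mbps(num_cameras=4, fps=30)
print(f"server-only: {server_only:.0f} Mbit/s, edge-based: {edge_based:.3f} Mbit/s")
```

Even allowing for video compression, the gap is several orders of magnitude, so the edge devices may still earn their keep as bandwidth reducers even if the final decision lives on the server.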
Is my reasoning correct, or does the architecture shown in the image make sense?