Server vs edge paradigm

Hi there

I’m in the middle of a project that requires capturing images from a camera, processing them, and making decisions based on the results.
Here is the architecture of the system:

In this scenario, we rely on a server plus edge computing for decision making. In my opinion, the edge computer is redundant; we should do the processing on the server instead. Moreover, we can have multiple cameras, which would mean multiple edge computers.
In my understanding, edge computing is important in situations where the decision from the processing is made locally, to avoid the latency between the server and the place where the decision is applied.
Since we rely on a server, the advantages of the edge computer don’t apply to our solution.
I’m asking whether my reasoning is correct, or whether the architecture I showed in the image makes sense.

Thanks

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

The edge computer can process the video on the remote side and send only result messages to the server, so the server can focus on the business logic instead of handling decoding, image processing, and inference.
If the video streams need to be handled on the server instead, then the bandwidth to the server, the latency, and the server’s ability to handle decoding, image processing, and inference for every camera all need to be considered.
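To make the trade-off concrete: a single 1080p30 H.264 stream typically runs on the order of 4-8 Mbps per camera, while a per-frame JSON result is only a few hundred bytes. Below is a minimal sketch of the edge-side role, assuming a Python edge node, OpenCV for capture, a plain HTTP POST to an assumed server endpoint, and a hypothetical run_inference() standing in for the real model; none of these choices come from this thread, they are just one common way to wire it up.

```python
# Minimal edge-node sketch (assumptions: OpenCV capture, HTTP POST transport,
# hypothetical run_inference() placeholder for the real detector/classifier).
import time

import cv2
import requests

SERVER_URL = "http://my-server.local:8000/camera-events"  # assumed endpoint


def run_inference(frame):
    """Hypothetical placeholder for the local image processing / inference.

    A real edge node would run its detector or classifier here; this stub
    only reports the frame size so the example stays self-contained.
    """
    height, width = frame.shape[:2]
    return {"detections": [], "frame_size": [width, height]}


def main():
    cap = cv2.VideoCapture(0)  # camera attached to the edge device
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = run_inference(frame)       # heavy work stays on the edge
            result["timestamp"] = time.time()
            # Only a small JSON message crosses the network, not the video.
            requests.post(SERVER_URL, json=result, timeout=2.0)
    finally:
        cap.release()


if __name__ == "__main__":
    main()
```

With this split, the server only receives compact events per camera, which is why adding cameras mainly scales the edge side rather than the server’s decoding and inference load.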
