Hello, I am studying “edge computing with microservices”.
While reading papers and various articles, I found many attempts to deploy microservices in edge computing. However, it was difficult to find real-world use cases.
I have seen some cases where multiple containers (running microservices) run on a single edge device (e.g., NVIDIA Orin), but I haven’t found examples where multiple edge devices (each running microservices) are interconnected, exchanging data and working together to perform a single application.
Autonomous driving, drones, smart healthcare, and smart factories are often mentioned, but I’m curious whether these examples truly use multiple interconnected edge devices (e.g., NVIDIA Orin).
I’ve read NVIDIA’s edge computing blog posts, but (I think) they are somewhat vague and lack concrete details.
Please share anything you know about this topic.
Thanks.
If you need AI processing done quickly on location, then it is a good use case. For example, some software might simply feed video to a home base, in which case the AI does not need to be on the edge device. In another case the edge device might need to act on the data itself, such as a drone navigating; you could technically send the entire video feed to the home base, process it there, and then send commands back to the drone, but by that time the drone has probably crashed or is flying “less than efficiently”. One reason navigation keeps showing up is that people want the AI for stereo vision, LIDAR, depth estimation, and so on running on the edge device itself (the drone is an example), so it can respond without waiting to talk to a base.
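To make that concrete, here is a rough Python sketch of the on-device control loop; capture_frame, estimate_depth, and send_command are placeholders I made up (a real drone would use something like a TensorRT engine for inference and MAVLink for flight commands), so treat it as a shape, not an implementation:

```python
import time

# Placeholder stand-ins for on-device sensing, inference, and actuation.
def capture_frame():
    return b"raw stereo frame"            # pretend sensor read

def estimate_depth(frame):
    return {"obstacle_m": 4.2}            # pretend on-device inference

def send_command(cmd):
    print("actuate:", cmd)                # pretend flight-controller call

# The whole sense -> infer -> act loop runs on the drone itself, so it
# reacts within roughly one frame time. Routing the video to a base
# (camera -> uplink -> inference -> command -> downlink) would add a
# network round trip to every reaction, easily hundreds of milliseconds
# over cellular, which is why the drone has probably crashed by then.
for _ in range(3):                        # a few iterations for the sketch
    frame = capture_frame()
    depth = estimate_depth(frame)
    if depth["obstacle_m"] < 5.0:         # obstacle closer than 5 m
        send_command("climb")
    time.sleep(1 / 30)                    # ~30 FPS loop cadence
```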
Let’s also suppose you have a lot of video and are doing some sort of identification with it. I know there are some forest service apps with cameras watching for smoke, and the cameras are reasonably high resolution. They tend to use a cellular modem for communications because of the remoteness. The cellular data is expensive and bandwidth limited. The camera isn’t moving, so you could send the whole video feed over the cell network if you had an unlimited budget and didn’t care about dropped frames, but if you run the compute at the detector itself and only send the frames with smoke “activity”, your cost will drop dramatically (and quality will go up).
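The pattern is simple enough to sketch; again, smoke_score is a made-up placeholder for whatever small detection model actually runs on the camera, and the numbers are only illustrative:

```python
import random

# Placeholders: a real camera might run a small CNN via TensorRT;
# the bandwidth-saving pattern is what matters here.
def capture_frame():
    return b"high-res frame (~2 MB)"      # pretend camera read

def smoke_score(frame):
    return random.random()                # pretend on-device detection

def upload(frame, score):
    print(f"sending frame over cellular, score={score:.2f}")

THRESHOLD = 0.8   # tune against an acceptable false-alarm rate

# Instead of streaming every frame over an expensive, bandwidth-limited
# cellular link, score each frame locally and upload only the ones that
# look like smoke. With rare events, this cuts transmitted data by
# orders of magnitude, and the frames you do send can be full quality.
for _ in range(100):                      # bounded loop for the sketch
    frame = capture_frame()
    score = smoke_score(frame)
    if score > THRESHOLD:
        upload(frame, score)
```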