I built a cloud video analytics platform using Python, TensorFlow, OpenCV, and Kafka. Each perception service is a microservice wrapping a TensorFlow model, and the microservices communicate over Kafka. The architecture is simple: after a file is uploaded (through a web application), the analysis is triggered by sending a Kafka message to all perception services with the location of the video (the path to the video file). Each microservice then sends its results back (again over Kafka) for further processing.
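To make the flow concrete, here is a minimal sketch of the trigger message the upload step could publish; the topic name and the payload field are my own assumptions, not details from my actual code:

```python
import json

# Hypothetical topic the perception services subscribe to (assumption).
TRIGGER_TOPIC = "video-analysis-requests"

def build_trigger_message(video_path: str) -> bytes:
    """Serialize the analysis request carried in the Kafka message body."""
    payload = {"video_path": video_path}  # field name is an assumption
    return json.dumps(payload).encode("utf-8")

# In the real service this would be published with kafka-python, e.g.:
#   producer = KafkaProducer(bootstrap_servers="localhost:9092")
#   producer.send(TRIGGER_TOPIC, build_trigger_message("/data/videos/clip.mp4"))
message = build_trigger_message("/data/videos/clip.mp4")
print(message.decode("utf-8"))
```

Each perception service consumes this message, opens the file at `video_path`, and publishes its results on a separate topic.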
Unfortunately, performance was poor, which is how I ran into DeepStream: the framework is awesome, and I want to reimplement all my microservices with it.
The obstacle I have at the moment is adding input sources dynamically while a DeepStream application is running. I want to store the video file and then send its location (maybe as a URI) to the DeepStream service over Kafka to start the analysis. I know NVIDIA provides a plugin for sending metadata out over Kafka, but what about the inputs? Do you think this is possible?
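What I have in mind on the DeepStream side is something like the sketch below: consume the Kafka message, turn the stored file's path into a `file://` URI (the form a GStreamer `uridecodebin` source accepts), and use that to attach the new input. The field name is the same assumption as in my trigger message; this is just the shape of the idea, not working DeepStream code:

```python
import json
from pathlib import Path

def message_to_source_uri(raw_message: bytes) -> str:
    """Extract the stored file's path from the Kafka message body and
    convert it to a file:// URI suitable for a uridecodebin source.
    The 'video_path' field name is an assumption."""
    payload = json.loads(raw_message.decode("utf-8"))
    return Path(payload["video_path"]).as_uri()

# Example: a message carrying /data/videos/clip.mp4 becomes
# 'file:///data/videos/clip.mp4'
uri = message_to_source_uri(b'{"video_path": "/data/videos/clip.mp4"}')
print(uri)
```

The open question is the step after this: whether a running DeepStream pipeline can accept such a URI as a new source without being restarted.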
Sorry for my English and my poor explanation. Any advice will be appreciated, thanks in advance.