Add file input at runtime with Kafka


I built a cloud video analytics platform using Python, TensorFlow, OpenCV, and Kafka. Each perception service is modeled as a microservice with a TensorFlow model inside, and the microservices communicate over Kafka. The architecture is really simple: after a file is uploaded (through a web application), the video analysis is triggered by sending a Kafka message to all perception services with the location of the video (the path to the video file). Each microservice then sends its results back (over Kafka) for further processing.
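The trigger message itself is tiny. A minimal sketch of how it could be built, where the single-field schema and the topic name in the comment are my own assumptions, not part of the platform:

```python
import json

def build_analysis_message(video_path: str) -> bytes:
    """Serialize the trigger message sent to each perception service.

    The schema (a single 'video_path' field) is an assumption for
    illustration; the real payload may carry more metadata.
    """
    return json.dumps({"video_path": video_path}).encode("utf-8")

# Publishing it with kafka-python would look roughly like this
# (topic name and broker address are placeholders):
#
#   from kafka import KafkaProducer
#   producer = KafkaProducer(bootstrap_servers="localhost:9092")
#   producer.send("video-analysis", build_analysis_message("/videos/clip.mp4"))
```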
Unfortunately, performance was poor, which is how I ran into DeepStream: this framework is awesome, and I want to reimplement all my microservices with it.
The obstacle I have at the moment is adding input sources dynamically while a DeepStream application is running. I want to store the video file and then send its location (perhaps a URI) to the DeepStream service over Kafka to start the analysis. I know NVIDIA provides a plugin for sending metadata over Kafka, but what about the inputs? Do you think this is possible?
Sorry for my English and my poor explanation. Any advice would be appreciated; thanks in advance.


TensorFlow doesn’t show good performance because it isn’t optimized for the Jetson platform.

Currently, we don’t support bidirectional Kafka messaging.
This is a feature we want to enable in our future release.

For now, DeepStream supports the following input source types:

1: Camera (V4L2)
2: URI
3: MultiURI
4: RTSP
5: Camera (CSI) (Jetson only)
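For reference, a file-based URI source in a deepstream-app configuration group looks roughly like this; the path is a placeholder and exact keys can vary between DeepStream versions:

```
[source0]
enable=1
# type 2 = URI source
type=2
uri=file:///path/to/video.mp4
gpu-id=0
```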

Do you think it is possible to implement your requirement with RTSP source?

Thank you very much for your answer.
The problem is that I store the video files in an object storage service such as S3.
How can I trigger video analysis each time a new video is added? The DeepStream application should wait for new uploads and start inference on each new video. Is there a standard mechanism for doing this? Is there a plugin for it?
Thanks again


Currently, we don’t have a mechanism for a use case like this, but it’s possible to implement a customized version with our plugin API.
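One way to bridge the gap in the meantime is a small watcher process outside DeepStream that polls the bucket and reacts to new objects. A minimal sketch of the diffing logic, where the polling loop and bucket name in the comments are my own assumptions:

```python
def new_uploads(previous_keys, current_keys):
    """Return object keys present in current_keys but not in previous_keys."""
    return sorted(set(current_keys) - set(previous_keys))

# A hypothetical polling loop around it (bucket name is a placeholder):
#
#   import time
#   import boto3
#   s3 = boto3.client("s3")
#   seen = []
#   while True:
#       resp = s3.list_objects_v2(Bucket="my-videos")
#       keys = [obj["Key"] for obj in resp.get("Contents", [])]
#       for key in new_uploads(seen, keys):
#           pass  # e.g. publish the object's URI to Kafka, or launch a pipeline
#       seen = keys
#       time.sleep(10)
```

S3-compatible stores can also push object-created notifications (for example to a queue), which avoids polling entirely; either way, the watcher only needs to hand the new object's URI to the analysis side.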