I have a problem with DeepStream in my application. I define four channels that decode, run inference on, and display four network video streams simultaneously. It works well when all four videos have the same frame rate.
However, the inference and display modules wait until all channels have finished decoding and then pass all of the frames to inference as one batch (batch size 4). So I'm wondering: can the inference task be triggered individually as each decoding channel produces a frame, without waiting for all channels to complete?
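For reference, the batching is done by the stream muxer; my [streammux] group in the deepstream-app style config looks roughly like this (values are illustrative, not my exact settings):

```ini
# nvstreammux settings (deepstream-app config format; values illustrative)
[streammux]
# one batch slot per input channel
batch-size=4
# microseconds to wait before pushing a (possibly partial) batch downstream
batched-push-timeout=40000
# inputs are live network streams
live-source=1
width=1920
height=1080
```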
My use case involves videos with different frame rates, so I don't want a video with a lower frame rate to delay inference on a video with a higher frame rate.