Does DeepStream run the pad buffer probe functions in the sample apps in a multi-threaded manner by default?
Unfortunately, no. The probes are just samples showing how to get and handle NvMetadata.
I timed one of those pad buffer probe functions (modified) and it runs at around 10 fps, but my display sink shows real-time framerates. Is the display sink buffering a few frames in order to hit 30 fps? I did notice a slight latency using a webcam source.
The pipeline and the plugins work asynchronously. The pad probe function itself, however, runs synchronously in our sample code. And yes, many display sinks have buffering mechanisms.
Shouldn’t the buffer run out at some point, as in this diagram? For example, if a plugin in the pipeline is bottlenecked at 15 fps while the camera is pushing frames at 30 fps, yet the display sink shows 30 fps with a certain amount of lag.
Are there two instances of the same slow plugin running in parallel in order to keep up (15 fps + 15 fps)?
What do you mean by bottleneck? How is the plugin limited to 15 fps: by measuring timestamps and dropping frames, or simply because the hardware cannot handle more than 15 fps?
If the bottleneck is caused by hardware performance, it will cause frame dropping in your camera case. If the 15 fps is controlled by the plugin, the extra frames are dropped by the plugin itself.
If the hardware can only handle 15 fps, I don’t understand how you could implement (15 fps + 15 fps).
If the bottleneck is not a hardware limitation, the plugin that keeps the 15 fps rate knows where the other frames go.
Bottleneck as in the plugin takes around 60-90 milliseconds per buffer. But my output looks smooth, albeit with a noticeable delay. I haven’t tested long enough to measure whether the delay keeps increasing.