Processing time is greater than frame gap in Python-backend-based Triton Inference Server

This is my configuration

• x86 Machine with dGPU

• DeepStream Version : deepstream-6.3

• TensorRT Version: 8.6.1.

• NVIDIA GPU Driver Version (valid for GPU only) : Driver Version: 535.86.10 CUDA Version: 12.2

• Issue Type: questions


We use nvinferserver with a Python-backend-based Triton Inference Server to run a complex inference algorithm on batches of frames. Whenever our processing time is slightly larger than the expected frame gap (e.g., 40 ms for 25 fps video, or ~33.3 ms for 30 fps), we see stuttering in the output video. Our algorithm is sophisticated and expensive enough that it needs more than 40 ms to process a single batch of frames. Is there any way to get smooth output video even when the processing time exceeds the allowed frame gap for a 25 fps source?
We tried different interval parameters, such as:

input_control {
  interval: 3
}
We need an interval of at least 3 to get at least ~10 fps of analytics, but the stuttering still exists. Please advise on how to solve this issue.
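For reference, the timing arithmetic above can be sketched as follows. This is a back-of-the-envelope check, assuming that nvinferserver's interval setting skips that many batches between inferences (i.e., inference runs on one out of every interval + 1 frames, following the nvinfer convention; the exact semantics should be confirmed against the plugin docs):

```python
def frame_gap_ms(fps: float) -> float:
    """Time budget between consecutive frames, in milliseconds."""
    return 1000.0 / fps

def realtime_budget_ms(fps: float, interval: int) -> float:
    """Maximum per-inference processing time that still keeps up with
    the source, when inference runs on one out of every
    (interval + 1) frames."""
    return frame_gap_ms(fps) * (interval + 1)

print(frame_gap_ms(25))           # 40.0 ms between frames at 25 fps
print(frame_gap_ms(30))           # ~33.3 ms between frames at 30 fps
print(realtime_budget_ms(25, 3))  # 160.0 ms per inference with interval: 3
```

If per-batch processing stays under the budget returned by `realtime_budget_ms`, the pipeline can in principle keep up, but frames still arrive in bursts unless the slow inference is decoupled from the rendering path (e.g., by buffering).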


I am unsure what you mean, but if you want to set your FPS to 10, one way to do so is the method mentioned here.

Processing the frames to produce the overlay takes a long time, resulting in roughly 4-5 fps in the output video; I need that to be 25 fps. Setting interval doesn't help, because it also causes stuttering of the video.