Re-Inference Interval when using nvdspreprocess before SGIE

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Laptop RTX 4070
• DeepStream Version: 7.1
• TensorRT Version: 10.3 using ngc deepstream container
• NVIDIA GPU Driver Version (valid for GPU only): 570
• Issue Type (questions, new requirements, bugs): Question

Is there a way to apply a per-object re-inference interval when using nvdspreprocess?

I am using the following pipeline structure:

urisrc → nvstreammux(new) → nvinfer → nvtracker → nvdspreprocess → nvinfer(input-tensor-from-meta=1) → tiler → nvvidconv → nvosd → nveglglessink

So far it works as expected. The only problem is that my second nvinfer runs on every object in every frame, and since this is an instance-segmentation model it slows the pipeline down considerably when many objects are in view (40 on average). I am looking for ways to reduce the number of inferences per object; I don't need every frame, roughly every 5th frame per object would be enough. Ideally the load on the second model would be distributed evenly across frames.
E.g., 25 fps with 40 objects means 1000 inferences per second in the second model, which obviously won't be possible with a segmentation model.

Currently our nvdspreprocess does not support this feature. However, since both nvdspreprocess and nvinfer are open source, you can modify nvdspreprocess yourself by following the interval-skipping code in nvinfer.

gstnvinfer.cpp:

static GstFlowReturn
gst_nvinfer_process_full_frame (GstNvInfer * nvinfer, GstBuffer * inbuf,
    NvBufSurface * in_surf)
{
  ...
  /* Process batch only when interval_counter is 0. */
  skip_batch = (nvinfer->interval_counter++ % (nvinfer->interval + 1) > 0);
  ...
}
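To adapt that idea for a per-object interval in a modified nvdspreprocess, one option is to gate each object on its nvtracker object_id instead of using a single global counter. The sketch below is a hypothetical helper, not part of the actual DeepStream sources: `ObjectIntervalGate` and its method names are assumptions for illustration. It decides whether an object should be included in the tensor batch on the current frame, and the phase offset derived from `object_id` staggers the objects, so with interval = 4 (process every 5th frame) roughly one fifth of the objects are inferred on each frame, spreading the SGIE load evenly.

```cpp
#include <cstdint>
#include <unordered_set>

// Hypothetical per-object interval gate for a modified nvdspreprocess.
// Mirrors nvinfer's interval logic, but keyed by the tracker object_id
// and phase-shifted so inference load is spread evenly across frames.
class ObjectIntervalGate {
public:
  // interval = number of frames to skip between inferences per object,
  // analogous to nvinfer's "interval" property.
  explicit ObjectIntervalGate(std::uint64_t interval)
      : period_(interval + 1) {}

  // Returns true if this object should be processed on this frame.
  bool should_process(std::uint64_t object_id, std::uint64_t frame_num) {
    // Always infer on first sighting so a new object is not left
    // unclassified for up to `period_ - 1` frames.
    if (seen_.insert(object_id).second)
      return true;
    // Afterwards, process only when the frame phase matches the
    // object's phase; different ids land on different frames.
    return (frame_num % period_) == (object_id % period_);
  }

private:
  std::uint64_t period_;                   // one inference every period_ frames
  std::unordered_set<std::uint64_t> seen_; // ids already inferred once
};
```

In a real patch, this check would run in the nvdspreprocess group/ROI processing loop, skipping tensor preparation for objects where it returns false; the downstream nvinfer (with input-tensor-from-meta=1) then simply never sees those objects in that frame, and the tracker keeps their previous metadata attached.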