Interval property of deepstream pipeline not working as intended

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) RTX 3090
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5
• NVIDIA GPU Driver Version (valid for GPU only) 535.98
• Issue Type( questions, new requirements, bugs) questions & bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi there, I have a DeepStream pipeline in Python with two secondary GIEs. What I'm trying to do is make the whole pipeline run once every X frames instead of running it on all frames. I learned that I need to use the interval property of the primary GIE to achieve this, but for some reason it isn't working as intended. I tried both available methods for setting this property:
1- setting the interval property in the code itself:

    print("Creating Pgie \n ")
    if requested_pgie != None and (requested_pgie == 'nvinferserver' or requested_pgie == 'nvinferserver-grpc') :
        pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
    elif requested_pgie != None and requested_pgie == 'nvinfer':
        pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    else:
        pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")

    if not pgie:
        sys.stderr.write(" Unable to create pgie :  %s\n" % requested_pgie)

    pgie.set_property("interval",5) 

2- setting it inside the pgie config file:

input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  operate_on_gie_id: -1
  interval: 5
}

Both methods give the same result: the interval variable is set, and I even verified per-frame inference using the bInferDone field of NvDsFrameMeta ("int, Boolean indicating whether inference is performed on given frame"), which is true only at the expected frames. However, when I check the processed frames with a simple print inside the tiler function, for example, it prints out all frames, the sink dumps out every frame of the original video, and each frame appears to have been processed by the pgie and the 2 sgies. Is there something buggy about this, or am I doing something wrong??
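For reference, this is roughly how I check it, with a buffer probe on the tiler's sink pad (a minimal sketch assuming the standard pyds bindings; the pad it is attached to is just an example):

    import pyds
    from gi.repository import Gst

    def infer_done_probe(pad, info, u_data):
        # Buffer probe that reports which frames actually ran inference,
        # based on the bInferDone flag in NvDsFrameMeta.
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            return Gst.PadProbeReturn.OK

        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            print("frame", frame_meta.frame_num, "bInferDone =", frame_meta.bInferDone)
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK

    # attached after building the pipeline, e.g.:
    # tiler.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, infer_done_probe, 0)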

Did you run that with our demo app or your own app? Could you reproduce this problem with our demo app?

Okay, as advised I tested this on the default deepstream-test1.py app. There are several important points:
1- The interval does indeed seem to take effect: when I set it to a certain value, py_nvosd_text_params.display_text is shown only for the non-skipped frames, and the output video has labels only on those frames; the rest of the frames are included but not labeled.

2- For some reason the execution time of the whole application and pipeline is not affected at all. I ran 3 trials with a video totaling 1440 frames, at intervals of 0 (no skipping), 5, and 100 (only ~15 inferred frames in total); all runs took about the same 48 seconds. This is not what I expected, since, if I understand correctly, the whole point of the interval property is to skip processing on most frames and hence decrease overall execution time, allowing higher-fps pipelines. Or does it only skip the post-processing (printing results, writing outputs) but still do all of the actual processing like detection and tracking?

3- I actually ran the same video through another pipeline that adds two more models on top of the detector, a ReID tracker and an age & gender model, and strangely enough it also takes the same 48 seconds. I understand that the pipeline stages run in parallel to minimize overall execution time, but it seems odd that it takes the same time as a single-stage pipeline.

Is the duration of the video you are using 48 seconds? If you want to test the performance, you need to set the sink plugin to fakesink.

sink = Gst.ElementFactory.make("fakesink", "fakesink")

Okay, so the duration of the video is indeed 48 seconds, and when I used the fakesink the time dropped drastically, to 1.4 seconds. But here's the thing: I then tried a longer video (54k frames) with the fakesink. I ran it twice, once with interval 0 and once with interval 1000, and both runs took 73 seconds, so we're back to the interval value having no effect on the execution time.
Two more things:
1- Why or how does the video duration correlate with the execution time when using the display sink? The fact that the execution time equals the video duration seems odd to me.
2- I also wanted to check whether the method I use to measure the time is correct. Ideally I would use NVDS_LATENCY_MEASUREMENT_META to retrieve NvDsFrameLatencyInfo, but since it's not currently available in the Python bindings, I used a simple time difference to estimate the execution time:

        time_st = time.time()
        loop.run()  # blocks until the pipeline stops (EOS or error)
        time_end = time.time()
        print("pipeline run time = ", time_end - time_st)

Keep in mind that even if this method is imprecise, in reality the pipeline still takes the same time to run with any interval value, which defeats the whole point of using interval to cut down processing time.

1. Because if you use the display sink, the final rendering is synchronized to the timestamps of the frames, so playback is paced in real time.

2. We'll add the latency test for Python in the next version. Your method can only measure the time of the whole pipeline, not the time of each plugin.

3. If you set the interval and it takes effect but the processing time remains unchanged, this indicates that the bottleneck is not the nvinfer plugin but possibly other plugins.
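Until then, if you want a rough idea of where the time goes, one option is to time buffers with ordinary pad probes on each element. A minimal sketch (plain GStreamer probes, nothing DeepStream-specific, and only an approximation because it ignores queuing inside the element):

    import time
    from gi.repository import Gst

    def attach_element_timer(element, name):
        # Roughly measure how long `element` holds each buffer by timestamping
        # it on the sink pad and again on the src pad.
        inflight = {}  # buffer PTS -> wall-clock time it entered the element

        def on_sink(pad, info, u_data):
            buf = info.get_buffer()
            if buf:
                inflight[buf.pts] = time.monotonic()
            return Gst.PadProbeReturn.OK

        def on_src(pad, info, u_data):
            buf = info.get_buffer()
            if buf and buf.pts in inflight:
                dt = time.monotonic() - inflight.pop(buf.pts)
                print("%s: %.2f ms" % (name, dt * 1000.0))
            return Gst.PadProbeReturn.OK

        element.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, on_sink, 0)
        element.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, on_src, 0)

    # e.g. attach_element_timer(pgie, "nvinfer") after the pipeline is built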

1- Okay, got it. I don't actually need the display at this stage, so I'll use the fakesink.
2- Yeah, I actually knew that from another ticket; I just wanted to make sure that my method works for the time being.
3- That's the point I was talking about here:

I ran it twice, once with interval 0 and once with interval 1000, and both runs took 73 seconds, so we're back to the interval value having no effect on the execution time.

I did this using the default test1 app, which only runs nvinfer without any additional inference plugins, so I'm not sure what the problem is, but it looks to me like a bug in the interval property itself.

For test1, there are many plugins with different functions. The other plugins, like the decoder, nvstreammux, and nvvideoconvert, all consume time to process the data.

Okay, then how can I achieve what I'm trying to do and halt all the plugins, not just nvinfer?
Or is there any other way to skip some frames entirely and effectively decrease the execution time accordingly?

I had come across the drop-frame-interval property, but I'm not sure if and how to use it to achieve this.

The pipeline always has an upper limit on how fast it can process video frames; it may be limited by your hardware. Let's just talk about the deepstream-test1.py that you tested: what performance do you want to achieve?

Okay, let me clarify. My goal is this: if a video with 54k frames takes 73 seconds to run through deepstream-test1.py when all frames are processed, I would like the code to process only 1 frame out of every 10, for example, with the other frames dropped entirely (not passing through any of the plugins), so that the processing time drops to approximately ~7.3 seconds (73/10), i.e. in proportion to the interval I skip frames by.
I know there will be some overhead just for running the process, so this ratio won't be exact, but I think it should be at most around 10 seconds.

OK. There are multiple methods to drop frames.

1. You can use the skip-frames or drop-frame-interval property of our decoder plugin.
2. You can add a videorate plugin after the source to control the fps.
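For illustration, a minimal sketch of both options, using the element names from deepstream-test1.py; the drop interval, framerate, and caps string below are placeholders, not recommended values:

    # Option 1: let the hardware decoder drop frames before they enter the
    # rest of the pipeline (drop-frame-interval keeps 1 frame out of every N).
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    decoder.set_property("drop-frame-interval", 10)

    # Option 2: insert a videorate element (plus a capsfilter) after the
    # decoder to force a lower output framerate.
    videorate = Gst.ElementFactory.make("videorate", "rate")
    rate_caps = Gst.ElementFactory.make("capsfilter", "rate-caps")
    rate_caps.set_property(
        "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), framerate=3/1"))
    # link order: ... -> decoder -> videorate -> rate_caps -> streammux sink pad -> ...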
