Skip frames

• Hardware Platform (Jetson / GPU) NVIDIA GeForce RTX 3090
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.4.0
• NVIDIA GPU Driver Version (valid for GPU only) 535.113.01
• Issue Type(questions, new requirements, bugs) questions

Hello,

I have a question.

Interval specifies the number of consecutive batches to be skipped for inference. Is there anything I can use to skip frames? I am not interested in all frames; I just want the pipeline to process f1, f6, f11, …, fn.
For simplicity, if I have a video of 100 frames, I want the resulting video/frames to be about 100/5. This means the whole pipeline processes/uses only 100/5 = 20 frames (if I set this property to 5).

Thank you.

You may consider videorate (gstreamer.freedesktop.org) after the video decoder.

Is there an example in Python that shows how I can use it?

It is a common open source GStreamer plugin. You can just use it the standard gst-python way; see Python GStreamer Tutorial (brettviren.github.io).

I am kinda confused now. I was investigating and found an element called “nvurisrcbin” which can be used to do exactly what I need.

Since I am following deepstream_test3.py, I used it to achieve my objective. What is the difference?

This is what I did:

uri_decode_bin = Gst.ElementFactory.make("nvurisrcbin", "uri-decode-bin")
uri_decode_bin.set_property("drop-frame-interval", 5)

Thank you.

This is a property of the hardware decoder; see Gst-nvvideo4linux2 — DeepStream 6.3 Release documentation.
It only works when your input source is an H264 or H265 encoded stream.
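If you build the source bin from a plain uridecodebin yourself (as deepstream_test3.py does) instead of using nvurisrcbin, a rough sketch of setting the same decoder property is shown below. The child-added wiring follows the sample app, and the URI is only a placeholder.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def decodebin_child_added(child_proxy, obj, name, user_data):
    # Recurse into nested bins so the callback eventually reaches the decoder.
    if "decodebin" in name:
        obj.connect("child-added", decodebin_child_added, user_data)
    # nvv4l2decoder is the hardware decoder used for H264/H265 input, which is
    # why drop-frame-interval has no effect on other codecs.
    if "nvv4l2decoder" in name:
        obj.set_property("drop-frame-interval", 5)

uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
uri_decode_bin.set_property("uri", "file:///tmp/sample_720p.h264")  # placeholder URI
uri_decode_bin.connect("child-added", decodebin_child_added, None)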

So it wouldn’t work if I am using RTSP?

Okay, could you provide a small example of how I can use videorate in Python? I really can’t get it working on my own. My pipeline keeps getting stuck!

Thanks.

It can work if the RTSP stream is encoded in H264 or H265 format.

The videorate is an open source GStreamer plugin. Please refer to videorate (gstreamer.freedesktop.org) for the usage.

Regarding the stuck pipeline, please first make sure there is no CPU or GPU load issue.
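As a rough standalone sketch (not a full DeepStream pipeline; the file URI and the 5 fps value are placeholders), the following decodes a stream, uses videorate to drop it down to at most 5 fps, and displays the result:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# drop-only=true makes videorate only drop frames (never duplicate), and
# max-rate=5 caps the output at 5 frames per second.
pipeline = Gst.parse_launch(
    "uridecodebin uri=file:///tmp/sample.mp4 ! "
    "videoconvert ! videorate drop-only=true max-rate=5 ! autovideosink"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()

# Quit the main loop on end-of-stream or error so the script terminates.
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda *_: loop.quit())
bus.connect("message::error", lambda *_: loop.quit())

try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)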

Okay, I got the videorate plug-in to work too!

But I want to know what the best practice is: when is it better to use videorate and when is it better to use nvurisrcbin? Here I am talking about performance, memory, and applicability.

Both of them did the same thing (as far as I can tell), but I am not sure if I am correct.

In my case, I’ll be using RTSP streams encoded in H264.

You are right, the two methods are similar if you put videorate right after the video decoder. The only advantage of videorate is that it can switch your stream to any framerate, no matter what the original stream framerate is.


That’s cool.

So, is this the right way to use videorate?

for i in range(number_sources):
    print("Creating source_bin", i)
    uri_name = args[i]
    if uri_name.find("rtsp://") == 0:
        is_live = True
    source_bin = create_source_bin(i, uri_name)
    if not source_bin:
        sys.stderr.write("Unable to create source bin \n")
    pipeline.add(source_bin)

    # Create nvvideoconvert.
    print("Creating nvvidconv", i)
    video_convertor = Gst.ElementFactory.make("nvvideoconvert", "nvvidconv_%u" % i)
    if not video_convertor:
        sys.stderr.write("Unable to create nvvidconv %d \n" % i)
    pipeline.add(video_convertor)

    # Create videorate.
    print("Creating videorate", i)
    videorate = Gst.ElementFactory.make("videorate", "videorate_%u" % i)
    if not videorate:
        sys.stderr.write("Unable to create videorate \n")
    videorate.set_property("rate", 5)
    pipeline.add(videorate)

    # Link: source_bin -> nvvideoconvert -> videorate -> streammux sink pad.
    source_bin.link(video_convertor)
    video_convertor.link(videorate)
    padname = "sink_%u" % i
    sinkpad = streammux.get_request_pad(padname)
    if not sinkpad:
        sys.stderr.write("Unable to create sink pad bin \n")
    srcpad = videorate.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to create src pad bin \n")
    srcpad.link(sinkpad)

Why do you add nvvideoconvert after source_bin?

I thought it was necessary. Is it?

Should the videorate component be just after the source bins?


If your source is an H264/H265 stream, it is not needed.

It is better to add videorate right after the decodebin. Please refer to videorate (gstreamer.freedesktop.org) for the correct usage of this plugin. It is a common open source GStreamer plugin, so there are lots of resources.
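As a rough sketch of that placement (reusing the loop variables from your snippet above; the 5/1 framerate is just a placeholder), drop nvvideoconvert and put videorate plus a caps filter between each source bin and the streammux sink pad:

videorate = Gst.ElementFactory.make("videorate", "videorate_%u" % i)
capsfilter = Gst.ElementFactory.make("capsfilter", "rate_caps_%u" % i)
# videorate adjusts the stream to whatever framerate the caps filter requests.
capsfilter.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), framerate=5/1"))
pipeline.add(videorate)
pipeline.add(capsfilter)

# source bin (decoded output) -> videorate -> capsfilter -> streammux
source_bin.link(videorate)
videorate.link(capsfilter)
sinkpad = streammux.get_request_pad("sink_%u" % i)
capsfilter.get_static_pad("src").link(sinkpad)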

deepstream_test_3.py (17.3 KB)
