Issue with 60fps AVI video

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 5.1
• TensorRT Version: 7.2

Hi, I wrote a program with DeepStream and while testing it I found that it doesn't work properly with an avi video at 60 fps. The output video is 60 fps, but it contains only every second frame, so it is half as long and plays twice as fast. I checked and pgie receives only every second frame, so I suspect the problem is in uridecodebin. When I test my application with an mp4 video at 60 fps, everything works fine. I run DeepStream from a Python script, but when I tested the same avi video with deepstream-app it worked correctly, so maybe the problem is with uridecodebin in the Python bindings. This is my code for uridecodebin:

import sys

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def cb_newpad(decodebin, decoder_src_pad, data):
    caps = decoder_src_pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_bin = data
    features = caps.get_features(0)
    # Need to check if the pad created by the decodebin is for video and not audio.
    if gstname.find("video") != -1:
        if features.contains("memory:NVMM"):
            # Get the source bin ghost pad
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")

def decodebin_child_added(child_proxy, Object, name, user_data):
    if name.find("decodebin") != -1:
        Object.connect("child-added", decodebin_child_added, user_data)

def create_source_bin(index, uri):
    # Create a source GstBin to abstract this bin's content from the rest of the
    # pipeline
    bin_name = "source-bin-%02d" % index
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    # We set the input uri to the source element
    uri_decode_bin.set_property("uri", uri)
    # Connect to the "pad-added" signal of the decodebin which generates a
    # callback once a new pad for raw data has been created by the decodebin
    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    uri_decode_bin.connect("child-added", decodebin_child_added, nbin)

    Gst.Bin.add(nbin, uri_decode_bin)
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

I took this code from your GitHub: https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-test3/deepstream_test_3.py
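For context, the source bin is then linked to nvstreammux roughly as in deepstream-test3. Below is only a minimal sketch of that wiring; the nvstreammux properties and the file path are placeholders taken from the sample, not the exact contents of my script.

# Minimal sketch of how the source bin is consumed downstream (assumes the
# imports and create_source_bin from the snippet above; nvstreammux settings
# follow deepstream-test3 and are placeholders, not the attached test_deep.py).
Gst.init(None)

pipeline = Gst.Pipeline.new("pipeline")
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("batch-size", 1)
streammux.set_property("batched-push-timeout", 4000000)
pipeline.add(streammux)

source_bin = create_source_bin(0, "file:///path/to/input.avi")
pipeline.add(source_bin)

# Request a sink pad on the muxer and link the source bin's ghost "src" pad to it.
sinkpad = streammux.get_request_pad("sink_0")
srcpad = source_bin.get_static_pad("src")
srcpad.link(sinkpad)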

This is my Python script for DeepStream: test_deep.py (7.6 KB). It takes two arguments: the path to the input video and the path to the output video.

And this is my config file for deepstream-app: deepstream_app_config_yoloV4.txt (4.2 KB)

I hope you can help me with this.

The problem may be related to the avi video you used. Please check the avi file.
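For example, one quick way to check how many frames uridecodebin actually delivers for this file is to count decoded buffers with a pad probe. This is only a sketch: the file path is a placeholder and the probe counter is just a debugging aid, not a DeepStream API.

# Count the buffers uridecodebin delivers for the avi file (debugging sketch).
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

frame_count = 0

def count_buffers(pad, info):
    # Called once per decoded buffer reaching the fakesink.
    global frame_count
    frame_count += 1
    return Gst.PadProbeReturn.OK

pipeline = Gst.parse_launch(
    "uridecodebin uri=file:///path/to/input.avi ! fakesink name=sink")
sink = pipeline.get_by_name("sink")
sink.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, count_buffers)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
print("decoded frames:", frame_count)

If the printed count matches the number of frames in the file, the avi itself is decoded correctly and the issue is elsewhere in the pipeline.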

But when I run DeepStream with deepstream-app and the config file I attached above, everything works fine, so DeepStream can run with this video and read it properly; the Python script should therefore also work.

You can try the deepstream-test3 Python sample with your avi file. If it works, you may refer to this sample.

deepstream_python_apps/apps/deepstream-test3 at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com)

I already tried this sample and it behaves the same way - it takes only every second frame, so it looks like uridecodebin works differently in deepstream-app than in the Python script.
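To narrow this down, one thing that might help is logging which elements uridecodebin actually instantiates for the avi file (demuxer, parser, decoder) and comparing them with the mp4 case and with what deepstream-app uses. A sketch extending the child-added callback from above; the print is only a debugging aid:

def decodebin_child_added(child_proxy, Object, name, user_data):
    # Debug version of the callback above: print every element uridecodebin
    # creates so the chain picked for the avi file can be compared with the
    # chain picked for the mp4 file.
    print("uridecodebin added element:", name)
    if name.find("decodebin") != -1:
        Object.connect("child-added", decodebin_child_added, user_data)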

@Fiona.Chen any update?