Split stream by regions and batch inference

I am using the Docker image nvcr.io/nvidia/deepstream:5.0.1-20.09-devel

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 11.1
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi, I am using the DeepStream 5.0 Python bindings, starting from the multi-stream example. I want to split each video frame in two: if the resolution is 1000x1000, I would like to split the stream into two streams of 1000x500 and then use streammux to run batched inference over them. In other words, split the input stream and batch it to parallelize inference.
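Roughly, the topology I have in mind is sketched below. This is only an illustration of the intent: the crop rectangles, element names and the left:top:width:height format I assume for src-crop are my own guesses, and the source/decoder and downstream nvinfer parts are omitted.

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("split-pipeline")

# Two converters crop the top and bottom halves of the same 1000x1000 feed
conv_top = Gst.ElementFactory.make("nvvideoconvert", "crop-top")
conv_top.set_property("src-crop", "0:0:1000:500")        # assumed left:top:width:height
conv_bottom = Gst.ElementFactory.make("nvvideoconvert", "crop-bottom")
conv_bottom.set_property("src-crop", "0:500:1000:500")

# nvstreammux batches both halves so the inference element sees a batch of 2
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", 2)
streammux.set_property("width", 1000)
streammux.set_property("height", 500)

for elem in (conv_top, conv_bottom, streammux):
    pipeline.add(elem)

# Each cropped branch requests its own sink pad on the muxer
conv_top.get_static_pad("src").link(streammux.get_request_pad("sink_0"))
conv_bottom.get_static_pad("src").link(streammux.get_request_pad("sink_1"))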

My starting point is the multi-stream example. I can set up multiple sources from the same camera and then use the videocrop component. Using the multi-stream tiler I could obtain something quite acceptable as a result. But I am wondering what I should do if I want to bring back the original image after nvosd has injected the bbox metadata.


What’s the pipeline you are trying?

What’s the meaning of “original image”? Referring to your example, is it the 1000x1000 image or the 1000x500 one?

Well, at this moment I am trying the following, starting from deepstream_python_apps/deepstream_test_3.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

In the create_source_bin method I want to add an nvvideoconvert element to crop the video source, like so:

def create_source_bin(index,uri, crop_size):
    print("Creating source bin")

    # Create a source GstBin to abstract this bin's content from the rest of the
    # pipeline
    bin_name="source-bin-%02d" %index
    print(bin_name)
    nbin=Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")


    uri_decode_bin=Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    uri_decode_bin.set_property("uri",uri)

    uri_decode_bin.connect("pad-added",cb_newpad,nbin)
    uri_decode_bin.connect("child-added",decodebin_child_added,nbin)

    Gst.Bin.add(nbin,uri_decode_bin)

    # Converter that will crop the decoded frames to one half of the image.
    # It is created here but not yet added to the bin or linked to the
    # decoder / ghost pad; this is the part I am stuck on.
    nvvideoconvert=Gst.ElementFactory.make("nvvideoconvert", "nvvideoconvert")
    nvvideoconvert.set_property("src-crop", crop_size)

    bin_pad=nbin.add_pad(Gst.GhostPad.new_no_target("src",Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

I don’t really know how to set the ghost pad target to the nvvideoconvert src pad, or how to link the decoder src pad to the nvvideoconvert. So this is my first problem.

Regarding your second question, once inference is done I would like to get the 1000x1000 image back.
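What gave me the “quite acceptable” result I mentioned above was the tiler: since each half is 1000x500, a 2-row, 1-column tiler with a 1000x1000 output stacks them back on top of each other. The values below are only my guess at how to express that, and they assume batch slot 0 ends up as the top tile.

# Hypothetical tiler settings to stack the two 1000x500 halves vertically
tiler = Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
tiler.set_property("rows", 2)
tiler.set_property("columns", 1)
tiler.set_property("width", 1000)    # output width  = width of one half
tiler.set_property("height", 1000)   # output height = 2 x 500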

I have fixed the first problem with a dirty solution: linking the bin src pad to the nvvideoconvert sink pad, as shown below, so nvvideoconvert is not inside the bin anymore. I guess the ideal solution would be to create an intermediate ghost pad and link that to the nvvideoconvert, but that’s what I cannot figure out.

self.bin.get_static_pad("src").link(nvvideoconvert.get_static_pad("sink"))
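For reference, what I am aiming for with the intermediate ghost pad is roughly the untested sketch below, adapted from the cb_newpad callback in deepstream_test_3.py. It assumes the converter was added to the bin in create_source_bin (Gst.Bin.add(nbin, nvvideoconvert)) and keeps the name "nvvideoconvert", and it relies on the Gst and sys imports already present in the script.

def cb_newpad(decodebin, decoder_src_pad, data):
    source_bin = data
    caps = decoder_src_pad.get_current_caps()
    gstname = caps.get_structure(0).get_name()
    features = caps.get_features(0)

    if gstname.find("video") != -1 and features.contains("memory:NVMM"):
        # Link the decoder's new src pad to the converter that lives inside the bin
        nvvideoconvert = source_bin.get_by_name("nvvideoconvert")
        if decoder_src_pad.link(nvvideoconvert.get_static_pad("sink")) != Gst.PadLinkReturn.OK:
            sys.stderr.write(" Failed to link decoder src pad to nvvideoconvert sink pad \n")
        # Point the bin's ghost pad at the converter's src pad, so the cropped
        # frames are what leave the source bin
        bin_ghost_pad = source_bin.get_static_pad("src")
        if not bin_ghost_pad.set_target(nvvideoconvert.get_static_pad("src")):
            sys.stderr.write(" Failed to set ghost pad target to nvvideoconvert src pad \n")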