nvv4l2decoder doesn't work in deepstream-python-apps

• Hardware Platform : Jetson
• DeepStream Version : 5.0 DP
• JetPack Version (valid for Jetson only) : 4.4 DP
• TensorRT Version : 7.x

I want to use a deepstream-python-apps sample. The sample uses decodebin for decoding, which does not expose a drop-frame-interval option like nvv4l2decoder does. I want to change the decoder to nvv4l2decoder. How do I do that?

Hi,
Please refer to the patch:


You would need to apply the C patch to the python code.
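As a rough sketch of what that change amounts to in Python (the drop-frame-interval value of 5 is only an example), the decoder properties are set inside decodebin's "child-added" callback, because nvv4l2decoder is created dynamically at runtime:

```python
# Sketch: decodebin creates nvv4l2decoder at runtime, so hook the
# "child-added" signal and set the decoder's properties once it appears.
def decodebin_child_added(child_proxy, obj, name, user_data):
    if "decodebin" in name:
        # Nested decodebins emit their own "child-added" signals,
        # so recurse into them as well.
        obj.connect("child-added", decodebin_child_added, user_data)
    if "nvv4l2decoder" in name:
        obj.set_property("bufapi-version", True)    # use DeepStream buffer API
        obj.set_property("drop-frame-interval", 5)  # e.g. keep 1 frame in 5
```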

I made the change in Python, but the Python source has this part:

for i in range(number_sources):
    os.mkdir(folder_name + "/stream_" + str(i))
    frame_count["stream_" + str(i)] = 0
    saved_count["stream_" + str(i)] = 0
    print("Creating source_bin ", i, " \n ")
    uri_name = args[i + 1]
    if uri_name.find("rtsp://") == 0:
        is_live = True
    source_bin = create_source_bin(i, uri_name)
    if not source_bin:
        sys.stderr.write("Unable to create source bin \n")
    pipeline.add(source_bin)
    padname = "sink_%u" % i
    sinkpad = streammux.get_request_pad(padname)
    if not sinkpad:
        sys.stderr.write("Unable to create sink pad bin \n")
    srcpad = source_bin.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to create src pad bin \n")
    srcpad.link(sinkpad)

def create_source_bin(index,uri):
    print("Creating source bin")

    # Create a source GstBin to abstract this bin's content from the rest of the
    # pipeline
    bin_name="source-bin-%02d" %index
    print(bin_name)
    nbin=Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    # Source element for reading from the uri.
    # We will use decodebin and let it figure out the container format of the
    # stream and the codec and plug the appropriate demux and decode plugins.
    uri_decode_bin=Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    # We set the input uri to the source element
    uri_decode_bin.set_property("uri",uri)
    # Connect to the "pad-added" signal of the decodebin which generates a
    # callback once a new pad for raw data has been created by the decodebin
    uri_decode_bin.connect("pad-added",cb_newpad,nbin)
    uri_decode_bin.connect("child-added",decodebin_child_added,nbin)

    # We need to create a ghost pad for the source bin which will act as a proxy
    # for the video decoder src pad. The ghost pad will not have a target right
    # now. Once the decode bin creates the video decoder and generates the
    # cb_newpad callback, we will set the ghost pad target to the video decoder
    # src pad.
    Gst.Bin.add(nbin,uri_decode_bin)
    bin_pad=nbin.add_pad(Gst.GhostPad.new_no_target("src",Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

def decodebin_child_added(child_proxy,Object,name,user_data):
    print("Decodebin child added:", name, "\n")
    if(name.find("decodebin") != -1):
        Object.connect("child-added",decodebin_child_added,user_data)   
    if(is_aarch64() and name.find("nvv4l2decoder") != -1):
        print("Seting bufapi_version\n")
        Object.set_property("bufapi-version",True)
        Object.set_property("drop-frame-interval", 5)

But this still uses decodebin; it does not use nvv4l2decoder directly.

Hi,
It should pick nvv4l2decoder inside decodebin. You may set the environment variable:

$ export GST_DEBUG=*FACTORY*:4

to list all the plugins that get picked.
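In a Python app the variable can also be set from code, as long as it happens before GStreamer is initialized (the variable name and level are from the reply above; setting it in-process rather than via export is just a convenience):

```python
import os

# Must be set before Gst.init() so the GStreamer debug system picks it up.
# Factory-creation messages at level 4 show which elements decodebin chose,
# e.g. a line mentioning nvv4l2decoder if the hardware decoder was selected.
os.environ["GST_DEBUG"] = "*FACTORY*:4"
```

Then run the app and search the log output for "nvv4l2decoder".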

Is it possible to use the above Python code to decode multiple streams in my own Python app? If so, how can I pass the decoded frames into my app?

Hi,
The sample supports multi-stream. Please check README:

$ python3 deepstream_imagedata-multistream.py <uri1> [uri2] ... [uriN] <FOLDER NAME TO SAVE FRAMES>

The samples are open source and carry a copyright notice. You can customize them by following the notice.

The suggested command works for decoding multiple streams, but I want to use the decoded frames directly in my Python app. It is not efficient to write the decoded frames to disk with the above command and then read them back for use in my app, right?

Hi,
Since the samples are open-source references, you may customize them to fit your use case. We have implementations in the Sink Group. It is easier and faster if you can apply your use case to the existing implementation.

I changed this part of the code to push data into a queue, but the system became slow. Why?
I printed the shape of frame_image and it shows the array's shape correctly every time, but when I push frame_image into the queue the system slows down; without the queue it runs fine. Is it correct and efficient to put a queue inside tiler_sink_pad_buffer_probe?

def tiler_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    num_rects=0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        # A pad probe must return a Gst.PadProbeReturn value
        return Gst.PadProbeReturn.OK
        
    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    #c = 0
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        #l_obj=frame_meta.obj_meta_list
        #num_rects = frame_meta.num_obj_meta
        #is_first_obj = True
        save_image = True
        obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
        }
        #while l_obj is not None:
            #try: 
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            #except StopIteration:
                #break
            #obj_counter[obj_meta.class_id] += 1
            # Periodically check for objects with borderline confidence value that may be false positive detections.
            # If such detections are found, annoate the frame with bboxes and confidence value.
            # Save the annotated frame to file.
            #if((saved_count["stream_"+str(frame_meta.pad_index)]%30==0) and (obj_meta.confidence>0.3 and obj_meta.confidence<0.31)):
                #if is_first_obj:
                    #is_first_obj = False
                    # Getting Image data using nvbufsurface
                    # the input should be address of buffer and batch_id
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        print('frame_meta.batch_id: ', frame_meta.batch_id)
        # Convert the Python array into numpy array format.
        frame_image = np.array(n_frame, copy=True, order='C')
        # Convert the array into cv2's default color format.
        frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2RGB)
        #if saved_count["stream_"+str(0)]%2==0 and frame_meta.batch_id==0:
            #cv2.imwrite('test{}.jpg'.format(time()), frame_image)
            #cv2.imshow('test', frame_image)
            #cv2.waitKey(1)
        if not q.full():
            q.put(frame_image)
        else:
            q.get()  # Queue.get() takes no item argument; this drops the oldest frame
        print('frame_image: ', frame_image.shape)
        #save_image = True
        #frame_image=draw_bounding_boxes(frame_image,obj_meta,obj_meta.confidence)
           # try: 
                #l_obj=l_obj.next
            #except StopIteration:
                #break

        #print("Frame Number=", frame_number, "Number of Objects=",num_rects,"Vehicle_count=",obj_counter[PGIE_CLASS_ID_VEHICLE],"Person_count=",obj_counter[PGIE_CLASS_ID_PERSON])
        # Get frame rate through this probe
        fps_streams["stream{0}".format(frame_meta.pad_index)].get_fps()
        #if save_image:
            #cv2.imwrite(folder_name+"/stream_"+str(frame_meta.pad_index)+"/frame_"+str(frame_number)+".jpg",frame_image)
        saved_count["stream_"+str(frame_meta.pad_index)]+=1        
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
        #sleep(1/20)
    return Gst.PadProbeReturn.OK

Hi,
Too many operations in the probe function may slow down the whole pipeline. You may check whether you can use the GStreamer plugins tee and queue to implement the use case.
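One likely culprit in the probe above is that Queue.put() blocks once the queue is full, stalling the whole pipeline. A non-blocking drop-oldest pattern keeps the probe fast (standard library only; the queue size of 30 and the single-producer assumption are illustrative):

```python
from queue import Queue, Full, Empty

q = Queue(maxsize=30)  # bounded, so memory stays flat; 30 is just an example

def push_frame(frame):
    """Push a frame without ever blocking: drop the oldest frame when full.

    Assumes a single producer (the probe); with multiple producers the
    second put_nowait() could still race and raise Full.
    """
    try:
        q.put_nowait(frame)
    except Full:
        try:
            q.get_nowait()  # discard the oldest frame to make room
        except Empty:
            pass            # a consumer emptied the queue in the meantime
        q.put_nowait(frame)
```

A consumer thread then calls q.get() at its own pace without back-pressuring the probe.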

Hi,
I'm new to this. Could you explain a little more? What is tee? Is it used to create branches from the pipeline workflow?

Hi,
tee is a native GStreamer plugin. Please check
https://gstreamer.freedesktop.org/documentation/coreelements/tee.html?gi-language=c

There is sample code using tee in deepstream-test4, for your reference.
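As a sketch, tee decodes the stream once and fans it out into independent branches; placing a queue at the head of each branch gives each branch its own thread, so a slow consumer does not stall the others. The pipeline description below is illustrative (element names and URI are assumptions, not taken from the sample) and could be run with gst-launch-1.0 or Gst.parse_launch():

```python
# Decode once, then fan out with tee; each branch sits behind its own queue.
PIPELINE_DESC = (
    "uridecodebin uri=file:///path/video.mp4 ! tee name=t "
    "t. ! queue ! nvvideoconvert ! nveglglessink "  # display branch
    "t. ! queue ! fakesink"                         # processing branch placeholder
)
```

In a real app the fakesink branch would be replaced by an appsink (or the probe) that feeds frames to your own code.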