Some questions about DeepStream 5

Thanks, that's good.
My last question is this:
I ran one of the deepstream-app samples with 8 RTSP streams, and it shows these results:

Q1- The average FPS for each stream is about 27. Does that mean the Nano processes each stream at 27 FPS simultaneously?
Q2- It runs one model for all streams. Does it capture one batch (containing 8 frames, one from each stream) and then run inference on that batch? If so, does it process the whole batch at once, or loop over each frame of the batch?

Q1- The average FPS for each stream is about 27. Does that mean the Nano processes each stream at 27 FPS simultaneously? ===> Yes.
Q2- It runs one model for all streams. Does it capture one batch (containing 8 frames, one from each stream) and then run inference on that batch? If so, does it process the whole batch at once, or loop over each frame of the batch? ==> One inference call processes one batch (8 frames) at the same time.
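For reference, the batch described in the answer to Q2 is assembled by the nvstreammux plugin, and its size is set in the deepstream-app config file. A sketch based on the standard sample configs (the width/height/timeout values are examples, not requirements):

```ini
[streammux]
gpu-id=0
live-source=1
# one frame from each of the 8 RTSP sources per batch
batch-size=8
batched-push-timeout=40000
width=1920
height=1080
```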

Thanks for your quick answers.
Q1- Is it possible to run a custom trained model, like face recognition, with DeepStream 5.0 DP?
Q2- I ran this sample, and I don't want it to show the tiled window; I want to do the processing in the background without showing anything. How do I do that?

Q1- Is it possible to run a custom trained model, like face recognition, with DeepStream 5.0 DP?

Yes. DeepStream wraps TensorRT and Triton, so any model that can run with TensorRT or Triton can run on DeepStream. Normally, using TensorRT gives better performance.
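For Q1, a custom TensorRT model is plugged in through the nvinfer config file. A minimal sketch (the file names below are placeholders for your own model and labels):

```ini
[property]
gpu-id=0
# placeholder paths -- point these at your own model
model-engine-file=your_face_model.engine
labelfile-path=labels.txt
batch-size=8
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=1
```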

I want to do the processing in the background without showing anything

I don’t get your point, could you clarify?

I ran this code. I want to use only the decoded frames for my own processing, but this code also shows the visualization, and I want to disable it.

You can change

sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")

to

sink = Gst.ElementFactory.make("fakesink", "nvvideo-no-renderer")

Thanks a lot.
Q1 - If I want to use the decoded frames in a custom Python app, how do I do that?
In this function of deepstream-imagedata-multistream, the decoded frames can be captured as frame_image, and I want to put the frames into a queue and consume the queue in my processing units, so I modified this function. But when I put the queue code into this function, the decoder breaks. Why does this happen? Is it because I can't use the queue in this part? If so, how do I capture the frames of the streams and use them in my processing units?

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    frame_number=0
    num_rects=0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list

    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num

        n_frame=pyds.get_nvds_buf_surface(hash(gst_buffer),frame_meta.batch_id)

        frame_image=np.array(n_frame,copy=True,order='C')

        frame_image=cv2.cvtColor(frame_image,cv2.COLOR_RGBA2RGB)
        
        # NOTE: q.get(frame_image) passes the array as the `block` argument and
        # raises; to keep only the newest frames, drop the oldest when full.
        if not q.full():
            q.put(frame_image)
        else:
            q.get()                # discard the oldest frame...
            q.put(frame_image)     # ...then enqueue the new one

        fps_streams["stream{0}".format(frame_meta.pad_index)].get_fps()
            
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
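The probe above runs on the pipeline's streaming thread, so it should only `q.put()` and return quickly; the heavy processing belongs in a separate worker thread that drains the same queue. A minimal, self-contained sketch of that consumer pattern using Python's thread-safe `queue.Queue` (the byte strings stand in for the numpy frames the probe produces):

```python
import queue
import threading

# Bounded queue shared with the pad probe (the probe is the producer).
q = queue.Queue(maxsize=16)
results = []

def worker():
    """Consume frames until the sentinel None arrives."""
    while True:
        frame = q.get()
        if frame is None:               # sentinel: pipeline has finished
            q.task_done()
            break
        results.append(len(frame))      # placeholder for the real processing
        q.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# Simulated producer; in the real app the probe calls q.put(frame_image).
for fake_frame in (b"frame0", b"frame1", b"frame2"):
    q.put(fake_frame)
q.put(None)                             # tell the worker to stop
t.join()
```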

Q2 - My input frame rate is higher than my output frame rate, i.e. input = 25 FPS and output = 5 FPS, because I want to capture every 5th frame, and nvv4l2decoder has the drop-frame-interval option. Does this sample code use nvv4l2decoder or uridecodebin? How can I use the drop-frame-interval option in this sample code?
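When the sample builds its sources with uridecodebin, the nvv4l2decoder is created internally, so the property has to be set from a child-added callback once the decoder appears. A minimal sketch of that pattern (the callback mirrors the bin-creation helpers in the Python samples; the commented connect call is an assumption about how your source bin is built):

```python
DROP_FRAME_INTERVAL = 5  # keep every 5th frame (25 fps in -> 5 fps out)

def decodebin_child_added(child_proxy, obj, name, user_data):
    """Runs each time uridecodebin adds an internal element."""
    if "decodebin" in name:
        # Recurse into nested decodebins until the real decoder shows up.
        obj.connect("child-added", decodebin_child_added, user_data)
    if "nvv4l2decoder" in name:
        # The hardware decoder was just created: set the drop interval on it.
        obj.set_property("drop-frame-interval", DROP_FRAME_INTERVAL)

# Where the source bin is created (assumption -- adapt to your code):
#   uri_decode_bin.connect("child-added", decodebin_child_added, None)
```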