How to print the data sent to face recognition

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) TX2
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application, and the function description.)

For a secondary inference plug-in connected after the detector, such as face recognition, how can I verify that the incoming data is correct, i.e. that the data it receives really is a face that has been cropped and resized?

It would be helpful to have a way of printing the data that is passed to the next stage.

Can you try adding a probe function and printing the metadata you need?
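For example, a pad probe on the SGIE sink pad can walk `batch_meta.frame_meta_list` and each frame's `obj_meta_list` and print the bbox fields. The formatting part of such a probe can be sketched in plain Python; `RectParams` below is a hypothetical stand-in for `pyds.NvDsObjectMeta.rect_params` so the snippet runs without DeepStream installed:

```python
from dataclasses import dataclass


@dataclass
class RectParams:
    # Stand-in for NvDsObjectMeta.rect_params (left/top/width/height).
    left: float
    top: float
    width: float
    height: float


def describe_object(class_id, confidence, rect):
    """Return the one-line summary a probe could print per detected object."""
    return (f"class={class_id} conf={confidence:.2f} "
            f"bbox=({rect.left:.0f},{rect.top:.0f},"
            f"{rect.width:.0f}x{rect.height:.0f})")


# In a real probe you would call this with obj_meta.class_id,
# obj_meta.confidence and obj_meta.rect_params for each object.
print(describe_object(0, 0.97, RectParams(120, 80, 64, 64)))
```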

Thank you for your reply.
Yes, of course I can do that.

I have a few questions; please answer them, thanks.

  1. After object detection, the bbox information is saved in the buffer's metadata,

but the image data passed to the next stage is the whole frame, not an image cropped according to the detection coordinates.

For tracking, does DeepStream crop the whole frame according to the detection information?

And for a new plugin such as face recognition, how is the image cropped?

Does the face-recognition plugin cut the images itself, or is this done by DeepStream's internal implementation?

  2. With the code below, the image I get is the whole original frame.

    How can I determine which image SGIE1 processes, i.e. whether the image it processes is the original frame or a cropped one?

    import sys

    import cv2
    import numpy as np
    import pyds
    from gi.repository import Gst

    def save_pic(gst_buffer, frame_meta, pic_name):
        # Map the frame surface for this batch entry.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Convert the mapped surface into a numpy array in copy mode.
        frame_copy = np.array(n_frame, copy=True, order='C')
        # Convert from RGBA (surface format) to OpenCV's BGRA.
        frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
        img_path = pic_name + "__1.jpg"
        print(" img_path   ", img_path)
        cv2.imwrite(img_path, frame_copy)

    def tiler_sink_pad_buffer_probe(pad, info, u_data):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            print("Unable to get GstBuffer ")
            return Gst.PadProbeReturn.OK
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            try:
                frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            except StopIteration:
                break
            frame_number = frame_meta.frame_num
            save_pic(gst_buffer, frame_meta,
                     "tiler_sink_pad_buffer_probe_" + str(frame_number))
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK

    tiler_sink_pad = tiler.get_static_pad("sink")
    if not tiler_sink_pad:
        sys.stderr.write(" Unable to get sink pad \n")
    else:
        tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER,
                                 tiler_sink_pad_buffer_probe, 0)

Can you refer to deepstream-imagedata-multistream?

Yes, I have referred to that code.
I can get the image data, but it is the raw frame data, not the data used in the second stage of processing.
Say I want to get the cropped image data: how is the cropped image data transmitted,
and how are the results of one stage passed to the next?
Please reply to the questions I mentioned above. Thank you.

BBox information is in the metadata. Downstream elements only need the metadata.
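To illustrate that point: with the full frame plus the `rect_params` fields (left, top, width, height) from `NvDsObjectMeta`, a downstream stage has everything it needs to produce a crop. A minimal NumPy sketch of the equivalent crop arithmetic (DeepStream's own crop/scale for a secondary GIE happens on the GPU; this is just the same logic on the CPU):

```python
import numpy as np


def crop_bbox(frame, left, top, width, height):
    """Crop an HxWxC frame using bbox coordinates like those in
    NvDsObjectMeta.rect_params (floats, clamped to the frame bounds)."""
    h, w = frame.shape[:2]
    x0 = max(0, int(round(left)))
    y0 = max(0, int(round(top)))
    x1 = min(w, x0 + int(round(width)))
    y1 = min(h, y0 + int(round(height)))
    return frame[y0:y1, x0:x1]


frame = np.zeros((1080, 1920, 4), dtype=np.uint8)  # full RGBA frame
face = crop_bbox(frame, 300.0, 200.0, 128.0, 160.0)
print(face.shape)  # → (160, 128, 4)
```

Note the clamping: a bbox near the frame edge simply yields a smaller crop rather than an out-of-bounds index.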

If a plug-in is connected after object detection, whether it is officially implemented,
such as the tracker, or self-implemented, such as face recognition,

will DeepStream crop the image data according to the bbox information in the metadata
and send the cropped image data on for tracking or face recognition?

So we do not need to crop the image data ourselves, am I right?

It depends on whether you have an SGIE in the pipeline.
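To expand slightly: whether a given nvinfer instance runs on full frames or on bbox crops is controlled by its config file. A sketch of the relevant keys for a secondary GIE (the key names are standard nvinfer config keys; the values here are illustrative):

```ini
[property]
# 1 = primary (runs on the full frame),
# 2 = secondary (runs on objects cropped from the upstream detector's bboxes)
process-mode=2
# only operate on objects produced by the GIE whose unique-id is 1
operate-on-gie-id=1
# optionally restrict inference to specific detected class ids
operate-on-class-ids=0
```

With `process-mode=2`, DeepStream itself crops and scales each detected object to the network's input resolution before inference, so the secondary model never sees the full frame.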

Can you explain that in detail? I'm not very familiar with DeepStream.