Hi everyone,
I’m working on a DeepStream pipeline where I want to detect and save cropped faces from video streams. I’ve added the following code at the tiler sink pad to crop and save the frames:
```python
if bbx_face:
    try:
        # Map the GPU buffer for this frame into a NumPy-accessible surface
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_image = np.array(n_frame, copy=True, order='C')
        # The surface is RGBA; convert to BGR for OpenCV
        frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGR)
        for frame, face_data in bbx_face.items():
            left, top, width, height = face_data['bbx']
            cropped_face = frame_image[int(top):int(top + height), int(left):int(left + width)]
            file_name = os.path.join(output_folder, f"face_{frame_number}_{frame}.jpg")
            success = cv2.imwrite(file_name, cropped_face)
            if success:
                print(f"Successfully saved {file_name}")
    except Exception as e:
        print(f"Failed to save face crop: {e}")
```
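As a side note, a minimal sketch (not from the DeepStream sample): detector bounding boxes can extend past the frame edge, which makes the NumPy slice above empty and causes `cv2.imwrite` to fail. A small clamping helper (`clamp_crop` is a hypothetical name) avoids that:

```python
import numpy as np

def clamp_crop(image, left, top, width, height):
    """Clamp a bbox to the image bounds before slicing, so cv2.imwrite
    never receives an empty or negative-sized crop."""
    h, w = image.shape[:2]
    x0 = max(0, int(left))
    y0 = max(0, int(top))
    x1 = min(w, int(left + width))
    y1 = min(h, int(top + height))
    if x1 <= x0 or y1 <= y0:
        return None  # bbox lies entirely outside the frame
    return image[y0:y1, x0:x1]
```

The crop loop can then skip `None` results instead of writing empty images.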
I've added `nvvideoconvert` and `filter1` as suggested in deepstream_python_apps/apps/deepstream-imagedata-multistream-redaction/ and linked the elements as follows:
```python
# Element linking
streammux.link(queue1)
queue1.link(pgie)
pgie.link(tracker)
tracker.link(nvanalytics)
nvanalytics.link(pgie1)
pgie1.link(sgie2)
sgie2.link(sgie3)
sgie3.link(sgie4)
sgie4.link(nvstreamdemux)
nvstreamdemux.link(nvvideoconvert)
nvvideoconvert.link(filter1)
```
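For reference, the redaction sample forces RGBA by placing a capsfilter directly after `nvvideoconvert`; the caps string is the key part. A minimal sketch of how it is typically built (the helper name is an assumption, not part of my pipeline):

```python
# Caps that filter1 must advertise so pyds.get_nvds_buf_surface() can map
# the buffer: NVMM (GPU) memory with RGBA pixel format.
def make_rgba_caps_string(use_nvmm=True):
    memory = "(memory:NVMM)" if use_nvmm else ""
    return f"video/x-raw{memory}, format=RGBA"

# In the pipeline this string would be applied as, e.g.:
#   caps = Gst.Caps.from_string(make_rgba_caps_string())
#   filter1.set_property("caps", caps)
```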
However, I'm still encountering the error:

```
get_nvds_buf_Surface: Currently we only support RGBA color Format.
```

It seems like my linking for `nvvideoconvert` might not be correct. Does anyone have advice on how to link `nvvideoconvert` and `filter1` properly so the buffer reaches my probe in RGBA format?