I want to save frames after inference; it doesn't work despite adding nvvideoconvert and capsfilter

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I have a DeepStream app that has multiple SGIEs. I have added a probe on nvosd where I check whether a vehicle is of type Truck; if so, I want to save the current frame. This is my code; it doesn't work:

    obj_meta_list = frame_meta.obj_meta_list
    while obj_meta_list:
        try:
            obj_meta = pyds.NvDsObjectMeta.cast(obj_meta_list.data)
        except StopIteration:
            break

        detection = obj_meta.class_id
        vehicleType = class_id_to_vehicle_type.get(detection, "Unknown")

        if vehicleType == "Truck":  # Save frame for Truck if violation detected
            save_violation_frame(frame_image, obj_meta.object_id)  # object_id is the tracker id

        obj_meta_list = obj_meta_list.next  # advance, or the loop never terminates

    def save_violation_frame(frame, tracking_id):
        filename = os.path.join(violation_folder, f"violation_{tracking_id}.jpg")
        cv2.imwrite(filename, frame)
        logging.info(f"Violation frame saved as {filename}")

    def get_frame_image(batch_meta, frame_meta):
        n_frame = pyds.get_nvds_buf_surface(hash(batch_meta), frame_meta.batch_id)
        frame_image = cv2.cvtColor(n_frame, cv2.COLOR_RGBA2BGR)
        return frame_image
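As a side note on the conversion step: what `cv2.cvtColor(..., cv2.COLOR_RGBA2BGR)` does here can be sketched with plain NumPy channel indexing, which also makes the required copy explicit (a minimal illustration on a synthetic one-pixel frame; `rgba_to_bgr` is a hypothetical helper, not part of the app above):

```python
import numpy as np

def rgba_to_bgr(rgba: np.ndarray) -> np.ndarray:
    """Drop the alpha channel and reverse the R/G/B order.

    For uint8 input this matches cv2.cvtColor(rgba, cv2.COLOR_RGBA2BGR).
    The array returned by pyds.get_nvds_buf_surface() maps the NvBufSurface
    memory, so making a copy before the buffer flows downstream is important.
    """
    return np.ascontiguousarray(rgba[..., [2, 1, 0]])  # select B, G, R

# tiny example frame: one pixel with R=10, G=20, B=30, A=255
frame = np.array([[[10, 20, 30, 255]]], dtype=np.uint8)
bgr = rgba_to_bgr(frame)
```

Fancy indexing (`[..., [2, 1, 0]]`) always returns a new array, so the result is safe to keep after the probe returns.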

I tried hashing the Gst buffer instead:

    def get_frame_image(gst_buffer, frame_meta):
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_image = np.array(n_frame, copy=True, order='C')

        if frame_image.shape[2] == 4:  # RGBA format
            frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGR)
        elif frame_image.shape[2] == 3:  # RGB format
            frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGB2BGR)

        return frame_image

I get this error:

Traceback (most recent call last):
  File "/home/a2i/Downloads/deep_app_stable/deepstream_lpr_app-master/deepstream-lpr-app/myapp.py", line 556, in tiler_src_pad_buffer_probe
    frame_image = get_frame_image(gst_buffer, frame_meta)
  File "/home/a2i/Downloads/deep_app_stable/deepstream_lpr_app-master/deepstream-lpr-app/myapp.py", line 347, in get_frame_image
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
RuntimeError: get_nvds_buf_Surface: Currently we only support RGBA/RGB color Format

Any help would be appreciated.

You need to add an nvvideoconvert element to change the color format to RGBA. Please refer to our demo code deepstream_imagedata-multistream.py.
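For reference, the arrangement the demo uses can be sketched like this (a hedged sketch modeled on deepstream_imagedata-multistream.py, not its exact code; the element names here are illustrative):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Convert to RGBA *upstream* of the probe that calls
# pyds.get_nvds_buf_surface(); on an NV12 surface that call raises
# "Currently we only support RGBA/RGB color Format".
nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
filter1.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))

# link order: ... pgie -> nvvidconv1 -> filter1 -> ... -> nvosd (probe here)
```

The capsfilter on its own converts nothing; it only forces the preceding nvvideoconvert to negotiate RGBA on its source pad.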

I have added this already

# Video convert for this stream

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", f"nvvidconv_{i}")
    if not nvvidconv:
        sys.stderr.write(f" Unable to create nvvidconv_{i}\n")
    pipeline.add(nvvidconv)
    elements['nvvidconv'] = nvvidconv

and here how i link:

# link the complete chain for this stream

        demuxsrcpad.link(queuesinkpad)
        elements['queue'].link(elements['nvvidconv'])
        elements['nvvidconv'].link(elements['nvosd'])
        elements['nvosd'].link(elements['capsfilter'])

What's wrong? Can somebody guide me?

You can read the demo code that I attached before.

...pgie->nvvidconv1->filter1...

The nvvidconv1 plugin and filter1 plugin can change the color format of the image to RGBA.

Thank you for your reply, but when I try to insert a filter in my pipeline I get a not-negotiated (-4) flow error.

I tried the sample app and the frames get saved perfectly, but not with my custom pipeline.

# link the complete chain for this stream

        demuxsrcpad.link(queuesinkpad)
        elements['queue'].link(elements['nvvidconv'])
        elements['nvvidconv'].link(elements['capsfilter_rgba'])
        elements['capsfilter_rgba'].link(elements['nvosd'])
        elements['nvosd'].link(elements['capsfilter'])
        elements['capsfilter'].link(elements['encoder'])
        elements['encoder'].link(elements['h264parser'])
        elements['h264parser'].link(elements['muxer'])
        elements['muxer'].link(elements['queue_hls'])
        elements['queue_hls'].link(elements['hlssink'])

Please refer to the demo source code I attached before. You need to set caps on the capsfilter_rgba element first.

Since our sample app works normally, please take a brief look at its source code first and then make your own customization.
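Putting the two replies together, the piece the link chain above appears to be missing is the caps property on capsfilter_rgba, set before the pipeline starts (a sketch under that assumption; the exact negotiation failure may differ per pipeline):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

capsfilter_rgba = Gst.ElementFactory.make("capsfilter", "capsfilter_rgba")
# Without caps the filter constrains nothing, and the nvvidconv ->
# capsfilter_rgba -> nvosd branch can negotiate a format the probe
# cannot read, or fail with not-negotiated (-4).
capsfilter_rgba.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))
```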

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.