Adding (blending) two images using DeepStream

I am currently working on a heatmap visualization in DeepStream (Python) and am seeking guidance on how to add (blend) two images directly within the pipeline so that I can use DeepStream's encoding functionality. After searching through the repository, I haven't come across any examples addressing this technique.

While I've found external solutions, such as cv2.imwrite and cv2.imshow, I believe DeepStream is powerful enough to handle this task internally, without the need for external tools. However, I'm struggling to find examples or documentation demonstrating how to perform this operation directly within the DeepStream video processing pipeline.

If anyone has insights on blending two images and feeding the result to DeepStream's encoder, I would greatly appreciate any guidance or examples you can share. Thank you in advance for any assistance!

Could you provide complete information as applicable to your setup and your whole pipeline? Thanks
Hardware Platform (Jetson / GPU)
DeepStream Version
JetPack Version (valid for Jetson only)
TensorRT Version
NVIDIA GPU Driver Version (valid for GPU only)
Issue Type (questions, new requirements, bugs)
How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Requirement details (This is for new requirements. Include the module name — for which plugin or for which sample application — and the function description.)

Hardware Platform (Jetson / GPU)
DeepStream Version
JetPack Version (valid for Jetson only)
NVIDIA GPU Driver Version (valid for GPU only)
Issue Type (questions, new requirements, bugs)
questions/new requirements

Full code pipeline.

How can I send this new_frame back into the pipeline?
Using the code below as an example, the frame in question comes from this line: new_frame = cv2.addWeighted(heatmap_img, 0.5, n_frame, 0.5, 0)

My challenge is figuring out how to send this modified new_frame back into the pipeline for further processing (use encoder). I’d appreciate any suggestions or alternative approaches to achieve this. The key goal is to seamlessly integrate this modified frame back into the existing pipeline.
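For reference, cv2.addWeighted computes a per-pixel saturating weighted sum: dst = clip(src1 * alpha + src2 * beta + gamma). A minimal NumPy sketch of the same computation (the add_weighted helper here is a hypothetical stand-in, not an OpenCV function):

```python
import numpy as np

def add_weighted(src1, alpha, src2, beta, gamma=0.0):
    """NumPy equivalent of cv2.addWeighted for uint8 images."""
    # Compute in float to avoid uint8 overflow, then saturate to 0..255.
    blended = src1.astype(np.float64) * alpha + src2.astype(np.float64) * beta + gamma
    return np.clip(blended, 0, 255).astype(np.uint8)

# Two uniform 2x2 RGB images for illustration.
a = np.full((2, 2, 3), 200, dtype=np.uint8)
b = np.full((2, 2, 3), 100, dtype=np.uint8)

# 50/50 blend: each pixel becomes 200*0.5 + 100*0.5 = 150.
out = add_weighted(a, 0.5, b, 0.5)
```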

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    # Heatmap accumulator built up elsewhere in the application.
    global global_img_np_array_norm

    frame_number = 0
    num_rects = 0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta.
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number = frame_meta.frame_num
        l_obj = frame_meta.obj_meta_list
        num_rects = frame_meta.num_obj_meta
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

        # Apply Gaussian blur and draw heatmap
        global_img_np_array_norm = cv2.GaussianBlur(global_img_np_array_norm, (9, 9), 0)
        heatmap_img = cv2.applyColorMap(global_img_np_array_norm, cv2.COLORMAP_JET)
        # Overlay heatmap on video frames
        new_frame = cv2.addWeighted(heatmap_img, 0.5, n_frame, 0.5, 0)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
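One commonly used approach (see the deepstream-imagedata-multistream samples): when the buffer is in RGBA format, pyds.get_nvds_buf_surface returns a NumPy array that is a view onto the underlying surface, so writing the blended pixels back with slice assignment propagates the change downstream, while rebinding the name does not. The sketch below simulates this with a plain NumPy array; n_frame would really come from pyds, and on Jetson with recent DeepStream releases you would also call pyds.unmap_nvds_buf_surface afterwards.

```python
import numpy as np

# Stand-in for the mapped NVMM surface. In the probe this would be:
#   n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
buffer_backing = np.zeros((4, 4, 4), dtype=np.uint8)  # 4x4 RGBA frame
n_frame = buffer_backing                              # pyds returns a view like this

# Stand-in for the colour-mapped heatmap from cv2.applyColorMap.
heatmap_img = np.full_like(n_frame, 200)

# Equivalent of cv2.addWeighted(heatmap_img, 0.5, n_frame, 0.5, 0).
new_frame = (0.5 * heatmap_img + 0.5 * n_frame).astype(np.uint8)

# Write back *in place*: slice assignment modifies the buffer the
# view points at, so the encoder downstream sees the blended frame.
# (n_frame = new_frame would only rebind the Python name.)
n_frame[:, :, :] = new_frame

# On Jetson (DeepStream 6.1+), unmap after modifying:
#   pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
```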

You can do this with the nvdsvideotemplate plugin, although there is no Python demo for it at the moment.
You can do whatever you want with the image inside this plugin. You can refer to our C/C++ demo deepstream-emotion-app, where we process the GstBuffer in the following code: process_buffer.
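For orientation, here is a rough sketch of where nvdsvideotemplate could sit in such a pipeline. This is a config fragment with placeholder names: the input file, resolution, and libcustom_videoimpl.so (the custom library you would compile for the plugin) are all assumptions, not values from this thread.

```shell
# Decoded frames -> batching -> custom processing (blending) -> encode.
gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvdsvideotemplate customlib-name=./libcustom_videoimpl.so ! \
  nvvideoconvert ! nvv4l2h264enc ! h264parse ! filesink location=out.h264
```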
