How to display a dynamically updated image using nv3dsink instead of streaming the input video

I am having trouble changing the output display frame (n_frame) content at will.

For example, I am working on a face recognition task. Once I recognize a face, I want to show only the recognized face on the display at a position (x, y), with some text information about the recognized person. I currently have all of this information in user_meta and a global variable.

Which plugins should I use to access and modify the output frame? Are there any references I can check on how to do that?

Thank you,

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1
• TensorRT Version: 8.5.2.2

What do you want to modify? The NvDsMeta?

You can use a GStreamer pad probe to get the NvDsMeta from the GstBuffer and modify its content. The downstream elements will then receive the modified metadata.

Most of our DeepStream sample apps have pad probe functions.
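For example, a minimal probe sketch following the pattern in the Python sample apps (e.g. deepstream_test_1.py); which element you attach it to and what you modify are up to your pipeline:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Retrieve the batch metadata attached to the GstBuffer
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Read or modify metadata here; downstream elements see the changes
        print("frame", frame_meta.frame_num, "objects:", frame_meta.num_obj_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach the probe to the sink pad of nvosd (or any element of interest)
osdsinkpad = nvosd.get_static_pad("sink")
osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)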

How do I modify the NvDsMeta to change the displayed frame? Can you provide an example?

I have a numpy array of size 600x1024 (display_im), whereas the nv3dsink image resolution is 1920x1080. I want to replace the nv3dsink image with display_im.

What do you want to change? The bboxes in the video? The frame resolution?

What is the numpy array? If it is an image, what is the format? RGB or YUV?

In my pipeline, I have face detection and recognition. Once face recognition is done, it provides the name of the person, and I fetch that person's face from my own database based on the name. With this information, I create an image (numpy array) by pasting the face at location (x, y) along with the person's name, using the OpenCV library.

Now, I want that image to be shown on the display instead of the input video with bounding boxes.

What is the numpy array? If it is an image, what is the format? RGB or YUV?
It's an RGB image created using OpenCV. Dtype: uint8.
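For reference, here is roughly how I compose display_im (the file path, coordinates, and name below are illustrative):

import cv2
import numpy as np

# Blank 600x1024 canvas matching the intended display resolution
display_im = np.zeros((600, 1024, 3), dtype=np.uint8)

# Paste the stored face for the recognized person at (x, y)
face = cv2.imread("faces/john_doe.jpg")  # face image from my database; note cv2.imread returns BGR order
x, y = 100, 50
h, w = face.shape[:2]
display_im[y:y + h, x:x + w] = face

# Draw the person's name below the face
cv2.putText(display_im, "John Doe", (x, y + h + 30),
            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)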

Currently DeepStream does not support image pasting (image OSD). If you know the image format, you may try to implement the image OSD function in the nvdsosd plugin (the plugin is open source).

OK. Is it possible to rescale the image resolution from 1920x1080 → 600x1024 and then modify the frame via the buffer returned by pyds.get_nvds_buf_surface?

I am experimenting with doing it this way:

nvvidconv --> videoscale --> capsfilter --> nvosd --> sink

I am trying to rescale by adding the videoscale and capsfilter_videoscale elements after the nvvidconv element, but the pipeline always hangs.

        # Use the converter to convert from NV12 to RGBA as required by nvosd
        nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
        if not nvvidconv:
            sys.stderr.write(" Unable to create nvvidconv \n")
        nvvidconv.set_property("gpu_id", self.opts.GPU_ID)
        self.pipeline.add(nvvidconv)

        # Create and configure the videoscale and capsfilter elements to resize the video
        videoscale = Gst.ElementFactory.make("videoscale", "myscale")
        if not videoscale:
            sys.stderr.write(" Unable to create videoscale \n")
        self.pipeline.add(videoscale)

        capsfilter_videoscale = Gst.ElementFactory.make("capsfilter", "mycapsfilter")
        if not capsfilter_videoscale:
            sys.stderr.write(" Unable to create capsfilter \n")
        # Target resolution from display_im: width = shape[1], height = shape[0]
        new_caps = Gst.Caps.from_string(
            f"video/x-raw,width={self.display_im.shape[1]},height={self.display_im.shape[0]}")
        capsfilter_videoscale.set_property("caps", new_caps)
        self.pipeline.add(capsfilter_videoscale)

        nvvidconv.link(videoscale)
        videoscale.link(capsfilter_videoscale)
        capsfilter_videoscale.link(nvosd)
        nvosd.link(sink)

Meanwhile, in the nvosd probe function, I write display_im into n_frame:

n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
n_frame[0:, 0:] = display_im

Any suggestions on how to fix it?

Thank you,

You can use nvvideoconvert to scale the video: Gst-nvvideoconvert — DeepStream 6.2 Release documentation

In a pad probe function, you cannot change the video resolution, format, or frame rate.
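For example, something like the following sketch, which drops videoscale and puts the target caps directly after nvvideoconvert (element names and the 1024x600 size are taken from the code above; RGBA is also what pyds.get_nvds_buf_surface requires):

# Scale and convert on the GPU with nvvideoconvert instead of videoscale
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
capsfilter = Gst.ElementFactory.make("capsfilter", "scale-caps")

# Request RGBA frames in NVMM (GPU) memory at the target resolution;
# nvvideoconvert scales and converts to satisfy these caps
caps = Gst.Caps.from_string(
    "video/x-raw(memory:NVMM), format=RGBA, width=1024, height=600")
capsfilter.set_property("caps", caps)

pipeline.add(nvvidconv)
pipeline.add(capsfilter)
nvvidconv.link(capsfilter)
capsfilter.link(nvosd)
nvosd.link(sink)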

Thanks for the support @Fiona.Chen. I am able to rescale and modify the output display of DeepStream by overwriting the n_frame buffer.

PS: when I do it this way, it doesn't work:

n_frame[0:, 0:] = display_im

But doing it this way works fine (given that n_frame and display_im have the same resolution):

n_frame[0:-1, 0:-1, :] = display_im[0:-1, 0:-1, :]
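For completeness, the probe body that works for me, sketched (the RGB→RGBA conversion is an assumption for a 3-channel display_im, since get_nvds_buf_surface exposes the frame as a 4-channel RGBA array):

n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
# Match the channel count of the RGBA frame before copying
if display_im.shape[2] == 3:
    src = cv2.cvtColor(display_im, cv2.COLOR_RGB2RGBA)
else:
    src = display_im
n_frame[0:-1, 0:-1, :] = src[0:-1, 0:-1, :]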

Glad to know you've got it working! Closing the topic.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.