I am having trouble changing the output display (n_frame) information at will.
For example, I am working on a face recognition task. Once I recognize a face, I want to show only the recognized face on the display at a position (x, y), together with some text information about the recognized person. I currently have all of this information in user_meta and a global variable.
Which plugins should I use to access and modify the output frame? Are there any references I can check on how to do that?
Thank you,
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1
• TensorRT Version: 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Including which sample app is used, the configuration file contents, the command line used and other details for reproducing)
• Requirement details (This is for new requirements. Including the module name — for which plugin or for which sample application — and the function description)
You can use a GStreamer pad probe to get the NvDsMeta from the GstBuffer and modify its content. The downstream elements will then receive the modified metadata.
Most of our DeepStream sample apps have pad probe functions.
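The traversal pattern those sample probes use can be sketched as follows. Since pyds only runs inside a DeepStream pipeline, the sketch below uses lightweight stand-in classes that mimic the glib list nodes (`data`/`next`) the bindings expose; the pyds calls they replace are noted in the comments. This is an illustration of the pattern, not the actual pyds API.

```python
class MockNode:
    """Stand-in for one node of the glib linked list pyds exposes."""
    def __init__(self, data, nxt=None):
        self.data, self.next = data, nxt

class MockFrameMeta:
    """Stand-in for pyds.NvDsFrameMeta: only the field we walk."""
    def __init__(self, obj_meta_list):
        self.obj_meta_list = obj_meta_list

class MockObjMeta:
    """Stand-in for pyds.NvDsObjectMeta: only the field we modify."""
    def __init__(self, label):
        self.obj_label = label

def rewrite_labels(frame_meta_list, make_label):
    """Walk frame -> object metadata and modify it in place.
    In a real probe this runs on the batch meta obtained via
    pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer)); downstream
    elements such as nvdsosd then see the modified metadata."""
    l_frame = frame_meta_list
    while l_frame is not None:
        # real code: frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        frame_meta = l_frame.data
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            # real code: obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            obj_meta = l_obj.data
            obj_meta.obj_label = make_label(obj_meta.obj_label)
            l_obj = l_obj.next
        l_frame = l_frame.next
```

The probe itself is attached with `Gst.Pad.add_probe(Gst.PadProbeType.BUFFER, callback, 0)` on, for example, the nvosd sink pad, as the Python sample apps do.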
In my pipeline, I have face detection and recognition. Once face recognition is done, it provides the name of the person, and I fetch that person's face from my own database based on the name. With this information, I create an image (a numpy array) by pasting the face at location (x, y) and drawing the person's name, using the OpenCV library.
Now, I want that image to be shown on the display instead of the input video with bounding boxes.
What is the numpy array? If it is an image, what is the format? RGB or YUV?
It's an RGB image created using OpenCV, dtype uint8.
Currently DeepStream does not support image pasting (image OSD). If you know the image format, you may try to implement the image OSD function in the nvdsosd plugin (the plugin is open source).
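Since nvdsosd works on RGBA frames, the core of such an image-paste step is copying an RGB patch into an RGBA frame at (x, y). A minimal sketch in numpy, assuming uint8 data and a hypothetical helper name `paste_rgb_patch` (not a DeepStream API):

```python
import numpy as np

def paste_rgb_patch(frame_rgba, patch_rgb, x, y):
    """Paste an RGB uint8 patch into an RGBA uint8 frame at (x, y),
    clipping at the frame border. The frame is modified in place, the
    way a mapped frame inside a probe would be."""
    h, w = patch_rgb.shape[:2]
    fh, fw = frame_rgba.shape[:2]
    h = min(h, fh - y)  # clip the patch to the frame extent
    w = min(w, fw - x)
    if h <= 0 or w <= 0:
        return
    frame_rgba[y:y + h, x:x + w, :3] = patch_rgb[:h, :w]
    frame_rgba[y:y + h, x:x + w, 3] = 255  # fully opaque
```

The same in-place slice assignment works whether the array is a plain numpy buffer or a view onto GPU-mapped frame memory.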
I am trying to rescale the output of an nvvideoconvert element by adding videoscale and capsfilter_videoscale elements, but the pipeline always hangs.
# Use convertor to convert from NV12 to RGBA as required by nvosd
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
if not nvvidconv:
    sys.stderr.write(" Unable to create nvvidconv \n")
nvvidconv.set_property("gpu_id", self.opts.GPU_ID)
self.pipeline.add(nvvidconv)
# creating and setting the videoscale and capsfilter elements to resize the video
videoscale = Gst.ElementFactory.make("videoscale", "myscale")
self.pipeline.add(videoscale)
capsfilter_videoscale = Gst.ElementFactory.make("capsfilter", "mycapsfilter")
new_caps = Gst.Caps.from_string(
    f"video/x-raw,width={self.display_im.shape[1]},height={self.display_im.shape[0]}")
capsfilter_videoscale.set_property("caps", new_caps)
self.pipeline.add(capsfilter_videoscale)
nvvidconv.link(videoscale)
videoscale.link(capsfilter_videoscale)
capsfilter_videoscale.link(nvosd)
nvosd.link(sink)
Meanwhile, in the nvosd probe function, I update n_frame with display_im.
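The frame-replacement step in such a probe can be sketched as below. In a real probe, `n_frame` would be the RGBA array returned by `pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)`; here a plain numpy array stands in so the logic is runnable anywhere, and `show_display_image` is a hypothetical helper name, not a DeepStream API. Note both arrays must share the same height and width, which is why the stream (or `display_im`) has to be scaled first.

```python
import numpy as np

def show_display_image(n_frame, display_im):
    """Overwrite the mapped RGBA frame in place with the composed
    RGB uint8 display image, so the display shows the composed
    image instead of the input video."""
    assert n_frame.shape[:2] == display_im.shape[:2], "resize first"
    n_frame[:, :, :3] = display_im  # copy RGB channels in place
    n_frame[:, :, 3] = 255          # opaque alpha
```

Because the copy is done in place on the mapped buffer, downstream elements render the replaced frame without any extra pipeline elements.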