R32.3.1 tx2-4g how to control nvoverlaysink

@Honey_Patouceul can I control putting another window over the nvoverlaysink display?

Does the face recognition stuff put the boxes in the video or does it have a second window over the video where the boxes are displayed?

I need to ASAP put a circle in the middle of my video, used for aiming…

It appears the nvoverlaysink is overwriting the current screen that is on the display, I need to have nvoverlaysink on the bottom and a second window over it…

Is this possible, and how do I do it?


Hello Terry,

Let me try to address your question in different parts.

  1. Is it possible to overlay elements after the nvoverlaysink element displays the video?

Sink elements are meant to be the end of a pipeline: they work as interfaces between GStreamer and other subsystems. For instance, nvoverlaysink serves as an interface between GStreamer and the display, capable of managing NVMM memory. So unfortunately there is no way to manipulate a buffer in the pipeline once it gets consumed by the nvoverlaysink element. If you would like more information about GStreamer elements, I suggest you take a look at the following guide:


  2. How do the AI samples put boxes over the video?

Now that we know it is not possible to modify video buffers after they are consumed by the nvoverlaysink element, we can deduce that those boxes are rendered on the video before it reaches the sink. In the case of NVIDIA DeepStream, the element that draws the boxes and labels is nvdsosd. By the time the video buffers reach nvoverlaysink, they already contain the overlaid objects. For an example, you can look at this wiki:
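To illustrate the ordering, here is a minimal DeepStream-style pipeline sketch. It assumes a Jetson with DeepStream installed; the inference config path is a placeholder you would replace with your own. The key point is that nvdsosd sits before nvoverlaysink, so the boxes are already baked into the buffers the sink consumes.

```shell
# Hedged sketch: nvdsosd draws boxes/labels on the NVMM buffers *before*
# nvoverlaysink displays them. Element properties vary by DeepStream version.
gst-launch-1.0 nvarguscamerasrc ! \
    'video/x-raw(memory:NVMM),width=1280,height=720' ! \
    m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=<your_model_config.txt> ! \
    nvvideoconvert ! nvdsosd ! nvoverlaysink
```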


  3. How can I then get a circle rendered over my video stream?

There are actually different options, and it all depends on your end goal, your system restrictions and the available resources. I will mention three possible approaches:

a) If you are creating your own GStreamer application, you can take a look at the GstVideoOverlay interface. It allows you to use GTK+ or Qt in the same window your sink element is using. You would have to make sure it is possible to either grab the nvoverlaysink window or force the element to use a specific window. Alternatively, you can use a different sink element, such as ximagesink or xvimagesink, which are supported according to the GstVideoOverlay documentation. If you wish to take a deeper look, please use this link:
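A quick way to check whether a given sink actually implements GstVideoOverlay is gst-inspect-1.0; the interface shows up under "Implemented Interfaces" in the element description:

```shell
# List the interfaces implemented by xvimagesink; look for GstVideoOverlay.
# (nvoverlaysink typically will not list it, which is the root of the problem.)
gst-inspect-1.0 xvimagesink | grep -A 3 'Implemented Interfaces'
```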


If you are looking for an easy-to-use solution that you can start using right away in your GStreamer pipeline, please check options b and c.

b) At RidgeRun we offer the Fast GStreamer Overlay Element (emboverlay). It gives you the ability to overlay simple objects onto the video stream; images, text and/or time and date are within the element's capabilities. The downside is that the element does not support NVMM, so you would need an nvvidconv before and after the element to use it with nvoverlaysink, or a single nvvidconv before the element together with a different video sink, such as xvimagesink. For further information, please check our site:
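The memory conversions described above can be sketched as follows. This assumes RidgeRun's emboverlay element is installed; the exact overlay properties (image, text, position) are omitted here and depend on the element's documentation.

```shell
# Option 1 (hedged sketch): nvvidconv copies out of NVMM for emboverlay,
# then back into NVMM for nvoverlaysink.
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM)' ! \
    nvvidconv ! 'video/x-raw' ! emboverlay ! \
    nvvidconv ! 'video/x-raw(memory:NVMM)' ! nvoverlaysink

# Option 2 (hedged sketch): a single conversion plus an X-based sink.
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM)' ! \
    nvvidconv ! 'video/x-raw' ! emboverlay ! xvimagesink
```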

c) We also offer an even more powerful element: the QT Overlay element. It allows you to embed more complex objects, such as animations, into the video stream, and is of course also capable of the simpler objects. Another big advantage is that it supports NVMM memory, which means no memory conversion is required to use it with nvoverlaysink. For more information, please visit:
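Because the element handles NVMM directly, the pipeline stays simple. Note that the element name "qtoverlay" and its property are assumptions here; check the names on your installation with gst-inspect-1.0.

```shell
# Hedged sketch: no nvvidconv round trip needed since the overlay element
# (assumed name: qtoverlay) works on NVMM buffers directly.
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM)' ! \
    qtoverlay qml=<your_overlay.qml> ! nvoverlaysink
```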

Best regards,
Andres Campos


@andres.campos Thanks for the response, great information. I currently use your emboverlay GStreamer element and am really happy with it.

I guess this topic is more of a question about nvoverlaysink, an NVIDIA product, which basically appears to take over a section of the display and does not play well with other X windows.

I converted from nvoverlaysink to nveglglessink. nveglglessink is a sink/window that plays by the X window rules; we were able to raise another Qt window over the nveglglessink window and can now overlay images over the video.
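For anyone following the same route, the switch is just a sink swap; this sketch assumes a Jetson camera source, but any source works. Because nveglglessink renders into a normal X11/EGL window, other windows (such as a transparent Qt widget with the aiming circle) can be raised above it.

```shell
# Hedged sketch: nveglglessink behaves as a regular X window, so overlay
# windows can be stacked on top of it by the window manager.
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM)' ! nveglglessink
```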

But your info on AI embedded info in videos was helpful.


Thanks for your answer. I guess I misunderstood your question. It's great to hear that you have already found a solution to the issue by using nveglglessink. I am also very happy that the information regarding AI is useful to you.

Andres Campos

For this use case, we suggest using nvcompositor to composite the surfaces into a single plane and do the frame rendering.

As @DaneLLL mentioned, your best choice for this would be rendering onto video frames with nvcompositor. Be aware that using nvvidconv may result in rendering on a black background, but this seems to work (you should see the ball moving over cyan bars that have alpha=0):

gst-launch-1.0 nvcompositor name=comp sink_0::zorder=2 sink_1::zorder=1 ! \
    nvvidconv ! nvoverlaysink \
  videotestsrc ! alpha method=custom target-b=255 ! \
    video/x-raw,format=RGBA ! comp.sink_0 \
  videotestsrc pattern=ball ! video/x-raw,format=RGBA ! \
    nvvidconv ! video/x-raw,format=RGBA ! comp.sink_1
