Having 2 nvdrmvideosinks for overlay and 2nd display creates weird artefacts

Hey guys,
I use GStreamer to display the output of nvarguscamerasrc via nvdrmvideosink.
I chose nvdrmvideosink because of its plane-id property.
It lets me render a video in the background of a Qt Quick application that runs on a higher plane.

Now I have two other use cases where nvdrmvideosink seems to fail.

1.) I want to render a static transparent SVG on top of the video stream. This is what I came up with. It looks complicated, but it performs much better than applying rsvgoverlay directly to my camera stream.

gst-launch-1.0 videotestsrc num-buffers=1 pattern=solid-color foreground-color=0x00000000 ! video/x-raw,width=1920,height=1080 ! rsvgoverlay location=/home/jetson/210305_Record_Logo.svg ! videoconvert ! nvvidconv ! imagefreeze ! nvdrmvideosink plane-id=2

When I do this, my displayed video frames show some weird artefacts and the overall performance seems worse.

2.) I want to display one nvarguscamerasrc stream on two output displays.
I do this with the interpipe src/sink elements from RidgeRun, connecting two interpipesrc elements to one interpipesink.

interpipesrc name=live_prev_intpsrc_2nd listen-to=cam_src is-live=true allow-renegotiation=true stream-sync=compensate-ts ! nvdrmvideosink conn-id=1 async=false sync=false
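For context, the producer side that this interpipesrc listens to would look roughly like the sketch below. This is an assumption on my part: the only thing fixed by the line above is that the interpipesink must be named cam_src to match listen-to; the caps (resolution, framerate) are illustrative.

```shell
# Hypothetical producer half of the interpipe setup.
# "cam_src" must match the listen-to property of the interpipesrc above;
# the caps are assumptions for illustration.
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! \
  interpipesink name=cam_src sync=false async=false
```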

Doing this gives me the same behaviour as in 1.).
So my interpretation was that running two nvdrmvideosinks at the same time causes this behaviour and is simply too much for the Jetson Nano.

But running two nvarguscamerasrc instances, each outputting to its own screen through nvdrmvideosink, works just fine, so that can't be the problem, can it?
Maybe interpipe is the problem here? But I am not using it in 1.), so I am confused.

So I'd like to know: have you seen similar issues with nvdrmvideosink? Is there a nicer way to send one camera stream to two displays (with GStreamer)? And is there a better way to do overlays on the Jetson Nano (with GStreamer) than this?

Also, how does nvdrmvideosink compare performance-wise to the other available sinks, such as nvoverlaysink? Which of them has the lowest latency? And is it faster/better to use NvDrmRenderer directly from C++ rather than through GStreamer?

I would really appreciate some more insight, or some inspiration on how to solve this another way.

Best regards,
jb

Hi,
Please disable Qt and try again. The nvdrmvideosink plugin is designed to run without any display manager. X11 is enabled in the default release, so to use nvdrmvideosink you have to disable it first. The steps are in the documentation:
https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/accelerated_gstreamer.html#wwpID0E02P0HA

We would suggest trying on a system without a display manager.

Sorry, I think there is a misunderstanding. The system is already running without a display manager, and the Qt part is not the problem! (Running Qt on top of nvdrmvideosink works perfectly.)

I configured the system to run without a display manager:

sudo systemctl set-default multi-user.target

Then I start Qt like this:
unset DISPLAY
export QT_QPA_PLATFORM=eglfs
export QT_QPA_EGLFS_INTEGRATION=eglfs_kms_egldevice
export QT_QPA_EGLFS_KMS_PLANE_INDEX=2

But this is not the part I am asking about, because it works just fine!

Can you give me a sample pipeline that displays one nvarguscamerasrc on two nvdrmvideosinks with good low latency?

Hi,
Please try the tee plugin. The following command works on the Jetson Nano devkit:

$ gst-launch-1.0 videotestsrc ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! tee name=t t. ! queue ! nvdrmvideosink conn-id=1 t. ! queue ! nvdrmvideosink conn-id=0

It should work if you replace videotestsrc with nvarguscamerasrc.
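Spelling out that substitution, the camera variant might look like the sketch below (untested here; the caps between nvarguscamerasrc and nvvidconv are assumptions, and nvarguscamerasrc already outputs NVMM buffers, so nvvidconv acts as a pass-through/scaler):

```shell
# Hedged example: tee one camera capture to both DRM connectors.
# Caps (resolution/framerate) are assumptions; adjust to your sensor mode.
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM)' ! \
  tee name=t \
  t. ! queue ! nvdrmvideosink conn-id=1 \
  t. ! queue ! nvdrmvideosink conn-id=0
```

The queue after each tee branch keeps one slow sink from stalling the other, which matters for latency when the two displays run at different refresh rates.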