I use GStreamer to display the output of nvarguscamerasrc to nvdrmvideosink.
The reason I use nvdrmvideosink is its plane-id property, which allows me to render the video in the background of a Qt Quick application that runs on a higher plane.
Now I have two other use cases where nvdrmvideosink seems to fail.
1.) I want to use it to render a static, transparent SVG on top of the video stream. This is what I came up with. It seems complicated, but it performs much better than applying rsvgoverlay directly to my camera stream:
gst-launch-1.0 videotestsrc num-buffers=1 pattern=solid-color foreground-color=0x00000000 ! video/x-raw,width=1920,height=1080 ! rsvgoverlay location=/home/jetson/210305_Record_Logo.svg ! videoconvert ! nvvidconv ! imagefreeze ! nvdrmvideosink plane-id=2
When I do this, the displayed video frames show some strange artefacts and overall performance gets worse.
2.) I want to display one nvarguscamerasrc stream on two output displays.
I do this with the interpipe src/sink elements from RidgeRun, connecting two interpipesrc elements to one interpipesink. One of the receiving branches looks like this:
interpipesrc name=live_prev_intpsrc_2nd listen-to=cam_src is-live=true allow-renegotiation=true stream-sync=compensate-ts ! nvdrmvideosink conn-id=1 async=false sync=false
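For comparison, a plain tee-based split without interpipe should in principle also feed two displays from one camera. This is only a sketch I have not validated on my setup; the caps and conn-id values are assumptions:

```shell
# Hypothetical tee-based variant: one nvarguscamerasrc feeds two
# nvdrmvideosink instances; the queues decouple the two branches
# so one slow sink does not stall the other.
gst-launch-1.0 nvarguscamerasrc \
  ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' \
  ! tee name=t \
  t. ! queue ! nvdrmvideosink conn-id=0 sync=false \
  t. ! queue ! nvdrmvideosink conn-id=1 sync=false
```

If this shows the same artefacts as the interpipe version, that would point at running two nvdrmvideosink instances rather than at interpipe itself.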
With this setup I get the same behaviour as in 1.).
So my interpretation was that running two nvdrmvideosink instances at the same time causes this behaviour and is simply too much for the Jetson Nano.
But running two nvarguscamerasrc pipelines, each outputting to a different screen through its own nvdrmvideosink, works just fine, so that can't be the problem?
Maybe interpipe is the problem here? But I am not using it in 1.), so I am confused.
So I'd like to know: Have you seen similar issues with nvdrmvideosink? Is there a nicer way to feed one camera stream to two displays (with GStreamer)? Is there a better way to do overlays on the Jetson Nano (with GStreamer) than this?
And how does nvdrmvideosink compare performance-wise to the other available sinks such as nvoverlaysink? Which one has the lowest latency? And is it faster/better to use NvDrmRenderer directly from C++ rather than through GStreamer?
I would really appreciate any further insight or inspiration on how to solve this differently.