I’ve found a weird hang that I have been unable to work around. My goal is to use nvcompositor to composite a couple of cameras and video sources, while also encoding and saving the camera streams with nvv4l2vp9enc. Several variations work, but the pipeline starts hanging when I assemble exactly the combination I want. I’ve simplified the case to the following:
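(The original script is not preserved in this thread excerpt. A minimal sketch of the topology described, where the device path, caps, and display sink are assumptions rather than the poster’s actual script, would be along these lines, written as a printable string so the branches are easy to see:)

```shell
# Hypothetical reconstruction -- the original script is not preserved in this
# excerpt. Device path, caps, and the display sink are assumptions.
PIPELINE=$(cat <<'EOF'
gst-launch-1.0 -e \
  nvcompositor name=comp ! 'video/x-raw(memory:NVMM)' ! nvoverlaysink \
  nvv4l2camerasrc device=/dev/video0 \
    ! 'video/x-raw(memory:NVMM),format=UYVY,width=1920,height=1080,framerate=45/1' \
    ! tee name=t \
  t. ! comp.sink_0 \
  t. ! fakesink \
  videotestsrc is-live=true \
    ! 'video/x-raw,width=1280,height=720,framerate=45/1' \
    ! nvvidconv ! comp.sink_1
EOF
)
printf '%s\n' "$PIPELINE"
```

The `t. ! fakesink` branch is the one that, when removed, makes the pipeline run.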
This code hangs with the following messages, and only the first frame of the videotestsrc is visible on the screen:
```
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
```
If I remove the fakesink line, it works correctly (the intention is to eventually replace the fakesink with a video encoder). If I swap the nvv4l2camerasrc for another videotestsrc, it works correctly. If I swap the videotestsrc for another nvv4l2camerasrc, it works correctly. But as written, with one of each, it hangs.
I’ve tried inserting queues in all manner of combinations, and I’ve tried different is-live and other settings.
If I substantially increase the latency parameter on the nvcompositor, it starts working, but extremely slowly, with large delays between frames in the video.
After a tee, or any demux that forks the stream, you may need a queue at the head of each sub-pipeline. Also, nvcompositor can output RGBA. Does the following help?
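(The suggested script is not preserved in this excerpt. A sketch of the described changes, with exact placement assumed: a queue at the head of each branch after the tee and before each compositor pad, plus RGBA caps on the compositor output:)

```shell
# Sketch of the suggested changes; queue placement, device path, and caps
# are assumptions, not the replier's exact script.
PIPELINE=$(cat <<'EOF'
gst-launch-1.0 -e \
  nvcompositor name=comp \
    ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvoverlaysink \
  nvv4l2camerasrc device=/dev/video0 \
    ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=45/1' \
    ! tee name=t \
  t. ! queue ! comp.sink_0 \
  t. ! queue ! fakesink \
  videotestsrc is-live=true \
    ! 'video/x-raw,width=1280,height=720,framerate=45/1' \
    ! nvvidconv ! queue ! comp.sink_1
EOF
)
printf '%s\n' "$PIPELINE"
```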
Your changes seemed to work for this specific example, though I’m not sure why. I tried queues everywhere, but it seems this specific combination is what’s necessary.
However, when I make it a bit more complicated, it hangs again in the same way. I’m trying to replace the fakesink with:
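(The replacement chain is not preserved here. Based on the later reply mentioning a queue between nvvidconv and nvv4l2vp9enc, it was presumably something like the following; the mux and filename are assumptions:)

```shell
# Hypothetical encoder branch replacing the fakesink. The muxer and filename
# are assumptions (VP9 in Matroska is a typical pairing).
BRANCH='t. ! queue ! nvvidconv ! queue ! nvv4l2vp9enc ! matroskamux ! filesink location=camera0.mkv'
printf '%s\n' "$BRANCH"
```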
Hi, thanks for the effort – your suggestion does not hang, but it is not what I’m looking for.
I want the camera video tee’d off into both the compositor for display and an encoder to save the video. I don’t want to save the composited video; saving the composited output actually works without issue. I can do a lot with this pipeline, but the exact thing I’m trying to do is what hangs.
Also note, per my original post: I can get it to work fine if I use two videotestsrc elements or two nvv4l2camerasrc elements. The problem only appears when I mix them.
Ok, got it. Your script looks correct (though the queue between nvvidconv and nvv4l2vp9enc may be unnecessary).
I currently only have an Orin devkit running JP-5.0.2 with a USB YUYV camera (ZED), so I use that camera with nvv4l2camerasrc (the colors are obviously wrong), and I’m encoding to H265 because Orin encoders don’t currently support VP9. I also have no virtual consoles, so I just used xvimagesink in the GUI.
The following script adapted from yours works fine in my case:
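(That adapted script is not preserved in this excerpt. Per the description, it would be along these lines, with H265 instead of VP9 and xvimagesink for display; caps, device path, and the output filename are assumptions:)

```shell
# Hypothetical version of the adapted script: H265 encoding (Orin has no VP9
# encoder) and xvimagesink. Caps, device path, and filename are assumptions.
PIPELINE=$(cat <<'EOF'
gst-launch-1.0 -e \
  nvcompositor name=comp ! nvvidconv ! xvimagesink \
  nvv4l2camerasrc device=/dev/video0 \
    ! 'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' \
    ! tee name=t \
  t. ! queue ! comp.sink_0 \
  t. ! queue ! nvv4l2h265enc ! h265parse ! matroskamux \
    ! filesink location=test.mkv \
  videotestsrc is-live=true \
    ! 'video/x-raw,width=1280,height=720,framerate=30/1' \
    ! nvvidconv ! queue ! comp.sink_1
EOF
)
printf '%s\n' "$PIPELINE"
```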
so it may be related to your sensor driver rather than to nvcompositor.
I may not be able to help further, but telling us more about your sensor and how it is connected may help other users figure out your issue.
Per the script, all the sources have capability filters that set this explicitly.
I know all cameras are running at 45fps. I assume the videotestsrc will be too.
The nvcompositor works fine with the mixed sources until I try to record the camera sources through a tee. If this were a framerate-mismatch problem, I’d expect the compositor to fail regardless of whether I’m recording the cameras.
I never found a solution to this, but I found an alternate way of getting what I want.
Instead of adding a videotestsrc with a time overlay, I’ve left those out and used a Qt application on a higher DRM plane to display information, periodically passing the timestamp from the GStreamer application to the Qt application for display.
Thanks to those that tried to sort this out, but it’s no longer needed.