Nvcompositor hang

Hello,

I’ve found a weird hang that I have been unable to work around. My goal is to use nvcompositor to composite a couple of cameras and video sources, while also encoding and saving the camera streams with nvv4l2vp9enc. Various partial configurations work, but the pipeline starts hanging once I assemble exactly what I want. I’ve simplified the case to the following:

#!/bin/bash

WIDTH=1280
HEIGHT=720
FRAMERATE=45
PADFMT_BASE="video/x-raw(memory:NVMM),framerate=$FRAMERATE/1"
PADFMT_RGBA="$PADFMT_BASE,format=RGBA"
PADFMT_SRC="$PADFMT_BASE,width=$WIDTH,height=$HEIGHT,format=UYVY"
    
gst-launch-1.0 -e \
    nvv4l2camerasrc device=/dev/video0 ! "$PADFMT_SRC" ! \
        tee name=src1 \
    src1. ! fakesink \
    src1. ! \
        nvvidconv  ! \
        "$PADFMT_RGBA" ! \
        queue ! \
        mux.sink_0 \
    videotestsrc is-live=true ! \
        "video/x-raw,width=1920,height=120,framerate=$FRAMERATE/1,format=RGBA" ! \
        nvvidconv ! \
        "$PADFMT_RGBA" ! \
        queue ! \
        mux.sink_1 \
    nvcompositor name=mux sink_0::xpos=0    sink_0::ypos=0  \
                          sink_1::xpos=0    sink_1::ypos=960 ! \
        queue ! nvdrmvideosink

This script hangs with the following messages, and only the first frame of the videotestsrc is visible on the screen.

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...

If I remove the line with the fakesink, it works correctly (the intention is to eventually replace the fakesink with a video encoder). If I swap out the nvv4l2camerasrc for another videotestsrc, it works correctly. If I swap out the videotestsrc for another nvv4l2camerasrc, it works correctly. But as written, it hangs.

I’ve tried putting queues in all manner of combinations, and I’ve tried different is-live and other settings.
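A typical variation, for illustration (leaky and max-size-buffers are standard GStreamer queue properties; the exact values here are examples, and no combination made a difference), was along these lines on the fakesink branch:

```shell
# Example queue variation on the fakesink branch: a leaky queue with
# explicit limits, so a blocked downstream drops buffers instead of
# stalling the tee. (Values are illustrative.)
    src1. ! queue leaky=downstream max-size-buffers=8 ! fakesink \
```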

If I increase the latency parameter on the nvcompositor substantially, it starts working, but extremely slowly, with large delays between frames in the video.

I’ve tried the patch noted on other nvcompositor threads in this forum:
https://elinux.org/Jetson/L4T/r32.6.x_patches
But it made no difference.
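In case it helps with diagnosis, where the pipeline stalls can be inspected with GStreamer’s standard debug facilities (generic environment variables, nothing Jetson-specific):

```shell
# Raise the GStreamer log level and have gst-launch-1.0 dump .dot
# graphs of the pipeline at each state change, which shows where
# negotiation or dataflow gets stuck.
export GST_DEBUG=3
export GST_DEBUG_DUMP_DOT_DIR=/tmp/gst-dots
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"

# Then run the pipeline as usual:
#   gst-launch-1.0 -e ...
# and render a graph of one of the dumped files with graphviz:
#   dot -Tpng /tmp/gst-dots/<name>.dot -o pipeline.png
```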

Any help would be greatly appreciated!

After a tee, or any demux that forks the stream, you should put a queue in front of each sub-pipeline. Also, nvcompositor can output RGBA. Does the following help?

gst-launch-1.0 -e \
    nvv4l2camerasrc device=/dev/video0 ! "$PADFMT_SRC" ! \
        tee name=src1 \
    src1. ! queue ! fakesink \
    src1. ! queue ! \
        nvvidconv  ! \
        "$PADFMT_RGBA" ! \
        queue ! \
        mux.sink_0 \
    videotestsrc is-live=true ! \
        "video/x-raw,width=1920,height=120,framerate=$FRAMERATE/1,format=RGBA" ! \
        nvvidconv ! \
        "$PADFMT_RGBA" ! \
        queue ! \
        mux.sink_1 \
    nvcompositor name=mux sink_0::xpos=0    sink_0::ypos=0  \
                          sink_1::xpos=0    sink_1::ypos=960 \
       ! nvvidconv ! nvdrmvideosink

Thanks for the hints.

Your changes seem to work for this specific example, though I’m not sure why. I had tried queues everywhere, but it seems this specific combination is what is necessary.

However, when I make it a bit more complicated, it stops working in the same way. I’m trying to replace the fakesink with:

    src1. ! queue ! \
        nvvidconv ! "video/x-raw(memory:NVMM), format=I420" ! \
        queue ! nvv4l2vp9enc bitrate=16000000 ! queue ! \
        webmmux ! filesink location=/data/test.webm \

This freezes in the exact same way as before.

Simply trying this:

    src1. ! queue ! \
        nvvidconv ! "video/x-raw(memory:NVMM), format=I420" ! \
        fakesink

Also freezes.

It seems to be very fussy, and it’s hard to understand why.

You would specify RGBA caps after nvcompositor, such as:

gst-launch-1.0 -ev \
    videotestsrc is-live=true ! \
        video/x-raw,width=1920,height=960,framerate=30/1 ! \
        nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! \
        queue ! mux.sink_0 \
    videotestsrc is-live=true pattern=ball ! \
        video/x-raw,width=1920,height=120,framerate=30/1 ! \
        nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! \
        queue ! mux.sink_1 \
    nvcompositor name=mux sink_0::xpos=0 sink_0::ypos=0 \
                          sink_1::xpos=0 sink_1::ypos=960 ! \
        'video/x-raw(memory:NVMM),format=RGBA' ! \
        nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! \
        nvv4l2vp9enc bitrate=16000000 ! queue ! \
        webmmux ! filesink location=test.webm

Hi, thanks for the effort – your suggestion does not hang, but it is not what I’m looking for.

I want the camera video to be tee’d off into both the compositor for display and an encoder to save the video. I don’t want to save the composited video – saving the composited stream actually works without issue, if that were what I wanted. I can do a lot with this pipeline, but exactly what I’m trying to do is what hangs.

Also note, per my original post: I can get it to work fine with two videotestsrc elements or two nvv4l2camerasrc elements. The problem comes when I mix them.

Adding caps to my script changes nothing:

#!/bin/bash

WIDTH=1280
HEIGHT=720
FRAMERATE=45
PADFMT_BASE="video/x-raw(memory:NVMM),framerate=$FRAMERATE/1"
PADFMT_RGBA="$PADFMT_BASE,format=RGBA"
PADFMT_SRC="$PADFMT_BASE,width=$WIDTH,height=$HEIGHT,format=UYVY"

gst-launch-1.0 -e \
    nvv4l2camerasrc device=/dev/video0 ! "$PADFMT_SRC" ! \
        tee name=src1 \
    src1. ! queue ! \
        nvvidconv ! "video/x-raw(memory:NVMM), format=I420" ! \
        queue ! nvv4l2vp9enc bitrate=16000000 ! queue ! \
        webmmux ! filesink location=/data/test.webm \
    src1. ! queue ! \
        nvvidconv  ! \
        "$PADFMT_RGBA" ! \
        queue ! \
        mux.sink_0 \
    videotestsrc is-live=true ! \
        "video/x-raw,width=1920,height=120,framerate=$FRAMERATE/1,format=RGBA" ! \
        nvvidconv ! \
        "$PADFMT_RGBA" ! \
        queue ! \
        mux.sink_1 \
    nvcompositor name=mux sink_0::xpos=0    sink_0::ypos=0  \
                          sink_1::xpos=0    sink_1::ypos=960 ! \
        "video/x-raw(memory:NVMM),format=RGBA" ! \
        nvvidconv ! \
        "video/x-raw(memory:NVMM), format=I420" ! \
        nvdrmvideosink

Ok, got it. Your script looks correct (though the queue between nvvidconv and nvv4l2vp9enc may be useless).
I currently only have an Orin devkit running JP-5.0.2 with a USB YUYV camera (ZED), so I use that camera with nvv4l2camerasrc (the colors are obviously wrong), and I’m encoding to H265 because the Orin encoders don’t currently support VP9. Also, I have no virtual consoles, so I just used xvimagesink in the GUI.

The following script adapted from yours works fine in my case:

#!/bin/bash

set -x

WIDTH=2560
HEIGHT=720
FRAMERATE=60
PADFMT_BASE="video/x-raw(memory:NVMM),framerate=$FRAMERATE/1"
PADFMT_RGBA="$PADFMT_BASE,format=RGBA"
PADFMT_SRC="$PADFMT_BASE,width=$WIDTH,height=$HEIGHT,format=UYVY"


gst-launch-1.0 -e \
nvv4l2camerasrc device=/dev/video0 ! "$PADFMT_SRC" ! tee name=src1 \
   src1. ! queue ! nvvidconv ! "video/x-raw(memory:NVMM),format=I420" ! nvv4l2h265enc ! h265parse ! matroskamux ! filesink location=test_h265.mkv  \
   src1. ! queue ! nvvidconv ! "$PADFMT_RGBA" ! queue ! mux.sink_0 \
videotestsrc is-live=true ! "video/x-raw,width=1920,height=120,framerate=$FRAMERATE/1" ! nvvidconv ! "$PADFMT_RGBA" ! queue ! mux.sink_1 \
nvcompositor name=mux sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=0 sink_1::ypos=960 ! "video/x-raw(memory:NVMM),format=RGBA" ! nvvidconv ! xvimagesink

so it may be related to your sensor driver rather than to nvcompositor.
I may not be able to help further, but more detail about your sensor and how it is connected may help other users figure out your issue.

Thanks for the test! I tried with your h265 pipeline and found the same issue.

I didn’t try with xvimagesink, but that would not be useful to me if it did work.

I had put queues all over the place to try to get this working, so I’m not surprised there are a few useless ones.

My setup is a Xavier NX on Connect Tech’s Rudi NX platform, with three NileCam21s connected via GMSL over the CSI-2 link.

Software is:

# R32 (release), REVISION: 7.2, GCID: 30192233, BOARD: t186ref, EABI: aarch64, DATE: Sun Apr 17 09:53:50 UTC 2022

Hi,
Please try increasing the number of output buffers:

gst-launch-1.0 -e \
nvv4l2camerasrc device=/dev/video0 ! "$PADFMT_SRC" ! nvvidconv output-buffers=8 ! "video/x-raw(memory:NVMM),format=RGBA" ! tee name=src1 \
   src1. ! queue ! nvvidconv ! "video/x-raw(memory:NVMM),format=I420" ! nvv4l2h265enc ! h265parse ! matroskamux ! filesink location=test_h265.mkv  \
   src1. ! queue ! nvvidconv ! "$PADFMT_RGBA" ! queue ! mux.sink_0 \
videotestsrc is-live=true ! "video/x-raw,width=1920,height=120,framerate=$FRAMERATE/1" ! nvvidconv ! "$PADFMT_RGBA" ! queue ! mux.sink_1 \
nvcompositor name=mux sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=0 sink_1::ypos=960 ! "video/x-raw(memory:NVMM),format=RGBA" ! nvvidconv ! xvimagesink


Hi,
Please set sync=0 to xvimagesink and try again:

... ! xvimagesink sync=0

Thanks for the suggestions.

I added output-buffers to the nvvidconv instances and it didn’t seem to change anything.

I’m not using xvimagesink, but adding sync=false to the nvdrmvideosink did not change anything either.

Hi,
So this may be specific to the v4l2 device. We have tried with a Logitech USB camera (C165) and an AVerMedia CAM513, and don’t hit the issue.

Is the v4l2 source running at 30fps, or does it have a different frame rate? The issue is probably because the sources do not have identical frame rates.

I’m running with a 45fps frame rate.

Per the script, all the sources have capability filters that set this explicitly.

I know all cameras are running at 45fps. I assume the videotestsrc will be too.

The nvcompositor works fine with the mixed sources until I try to record the camera sources through a tee. If it were a frame-rate mismatch, I’d expect the compositor to fail regardless of whether I’m recording the cameras.
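For completeness, the rates the driver advertises and the currently negotiated capture rate can be checked directly (v4l2-ctl is from the v4l-utils package; the device path matches my script):

```shell
# List the formats and frame intervals the sensor driver advertises,
# then the capture parameters currently in effect.
v4l2-ctl --device=/dev/video0 --list-formats-ext
v4l2-ctl --device=/dev/video0 --get-parm
```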

I never found a solution to this, but I found an alternate way of getting what I want.

Instead of adding a videotestsrc with a time overlay, I’ve left those out and used a Qt application on a higher DRM plane to display information. The GStreamer application periodically passes the timestamp to the Qt application for display.

Thanks to those that tried to sort this out, but it’s no longer needed.

