GStreamer nvcompositor to filesink hangs, but nvcompositor to nvoverlaysink works fine

So I’ve finally been able to get the output I want using hardware acceleration. I’m trying to output the overlay result to a file (and eventually RTMP). But I’m getting stuck…

This works to show the output on the display:

gst-launch-1.0 \
v4l2src io-mode=2 device=/dev/video0 ! "image/jpeg,width=1920,height=1080, framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0  \
v4l2src io-mode=2 device=/dev/video1 ! "image/jpeg,width=640,height=360, framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 \
nvcompositor name=comp sink_1::xpos=1270 sink_1::ypos=10 sink_2::xpos=50 sink_2::ypos=50 ! \
nvoverlaysink 

When I try to do something with the nvcompositor output, like saving to a file, with this command:

gst-launch-1.0 \
v4l2src io-mode=2 device=/dev/video0 ! "image/jpeg,width=1920,height=1080, framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0  \
v4l2src io-mode=2 device=/dev/video1 ! "image/jpeg,width=640,height=360, framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 \
nvcompositor name=comp sink_1::xpos=1270 sink_1::ypos=10 sink_2::xpos=50 sink_2::ypos=50 ! \
nvvidconv ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! h264parse ! qtmux ! filesink location=test_MJPG_H264enc_video1_rgba.mp4 -e

It hangs at this:

Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
Redistribute latency...
H264: Profile = 100, Level = 0 
Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 

Any ideas on what to try? Looking at these two threads, it should be possible:

Hi,
We have verified it working and provided it in

Please add num-buffers=100 to v4l2src to terminate the pipeline after capturing 100 frames of each source, and check if the saved mp4 is valid.

Edit: Sorry, I deleted my post from earlier today because I realized I didn’t reply to you.

Running this:

gst-launch-1.0 \
v4l2src num-buffers=100 io-mode=2 device=/dev/video0 ! "image/jpeg,width=1920,height=1080, framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0  \
v4l2src num-buffers=100 io-mode=2 device=/dev/video1 ! "image/jpeg,width=640,height=360, framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 \
nvcompositor name=comp sink_1::xpos=1270 sink_1::ypos=10 sink_2::xpos=50 sink_2::ypos=50 ! \
nvvidconv ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! h264parse ! qtmux ! filesink location=test_MJPG_H264enc_video1_rgba.mp4 -e

I get this:

gst-launch-1.0 \
> v4l2src num-buffers=100 io-mode=2 device=/dev/video0 ! "image/jpeg,width=1920,height=1080, framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0  \
> v4l2src num-buffers=100 io-mode=2 device=/dev/video1 ! "image/jpeg,width=640,height=360, framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 \
> nvcompositor name=comp sink_1::xpos=1270 sink_1::ypos=10 sink_2::xpos=50 sink_2::ypos=50 ! \
> nvvidconv ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! h264parse ! qtmux ! filesink location=test_MJPG_H264enc_video1_rgba.mp4 -e
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock


Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
Redistribute latency...
H264: Profile = 100, Level = 0 
NvMMLiteOpen : Block : BlockType = 4 
Redistribute latency...
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
Got EOS from element "pipeline0".
Execution ended after 0:00:07.295591751
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

The output is a file less than a second long with a very odd resolution of 1920x370 and only one camera input. Attached are the stream details from VLC and a screenshot of the video output.

Screen Shot 2020-10-06 at 11.08.58 AM

Also, if I paste the code that was confirmed to work from the thread you posted, I get a syntax error, which is odd… I would expect a different error relating to the fact that I don’t have that video source in that format… Did something change in GStreamer syntax or NVIDIA’s workflow? The threads seem to be from a while ago.

 gst-launch-1.0 -e v4l2src device=/dev/video1 ! queue ! ‘video/x-raw, format=(string)YUY2, width=(int)1920, height=(int)1080, framerate=(fraction)30/1’ ! nvvidconv ! ‘video/x-raw(memory:NVMM),format=RGBA’ ! comp.sink_0 v4l2src device=/dev/video0 ! queue ! ‘video/x-raw, framerate=(fraction)30/1’ ! nvvidconv ! ‘video/x-raw(memory:NVMM),format=RGBA’ ! comp.sink_1 nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=960 sink_0::height=540 sink_1::xpos=960 sink_1::ypos=0 sink_1::width=960 sink_1::height=540 ! nvvidconv ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! h264parse ! mux. pulsesrc device=“alsa_input.usb-VXIS_Inc_ezcap_U3_capture-02.analog-stereo” ! ‘audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=44100, channels=(int)2’ ! queue ! audioconvert ! voaacenc ! aacparse ! mpegtsmux name=mux ! filesink location=“pip3.ts”
-bash: syntax error near unexpected token `('
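One possible explanation (my own assumption, not confirmed anywhere in this thread): the command pasted from the forum contains typographic quotes (‘ ’ “ ”) substituted by the forum software rather than straight ASCII quotes. bash does not treat those as quoting characters, so the `(` in the `video/x-raw(memory:NVMM)` caps string is parsed as an unquoted shell metacharacter. A minimal reproduction:

```shell
# Curly quotes are ordinary characters to bash, so the '(' inside the caps
# string is seen as an unquoted metacharacter and triggers a parse error.
bash -c "echo ‘video/x-raw(memory:NVMM)’" 2>&1 | grep -o 'syntax error near unexpected token'
# prints: syntax error near unexpected token

# Straight single quotes protect the parenthesis as intended.
bash -c "echo 'video/x-raw(memory:NVMM)'"
# prints: video/x-raw(memory:NVMM)
```

If that is the cause, retyping the quotes by hand before running the pasted pipeline should make the syntax error go away.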

Hi,
It looks wrong to set sink_2 in nvcompositor. You have one source going to sink_0 and the other to sink_1, so you should set sink_0 instead of sink_2. You may also try putting the two sources side by side.

Good catch. I’m surprised no error was thrown for something like that. It still didn’t solve the problem, however; it hangs in the same place, as shown below:

gst-launch-1.0 \
> v4l2src io-mode=2 device=/dev/video0 ! "image/jpeg,width=1920,height=1080, framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0  \
> v4l2src io-mode=2 device=/dev/video1 ! "image/jpeg,width=640,height=360, framerate=30/1" ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 \
> nvcompositor name=comp sink_1::xpos=1270 sink_1::ypos=10 sink_0::xpos=50 sink_0::ypos=50 ! \
> nvvidconv ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! h264parse ! qtmux ! filesink location=test_MJPG_H264enc_video1_rgba.mp4 -e
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
Redistribute latency...
H264: Profile = 100, Level = 0 
Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4

Hi,
Without setting num-buffers, the pipeline keeps running and never stops. To generate a valid mp4, you need to set that property. If you would like to exit by pressing Ctrl+C, we suggest using matroskamux to generate an mkv file.
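As a sketch of that suggestion (my own adaptation of the pipeline from earlier in the thread, untested on hardware, so it is only assembled and printed here): swap `qtmux ! filesink` for `matroskamux ! filesink`, keep `-e` so Ctrl+C sends EOS, and the resulting .mkv should stay playable after an interrupt. The `test.mkv` filename is arbitrary; the quotes around the image/jpeg caps are dropped since those caps contain no shell metacharacters.

```shell
# Hypothetical matroskamux variant of the poster's pipeline, built as a
# string and echoed so it can be inspected before being run on the Jetson.
CMD="gst-launch-1.0 -e \
  v4l2src io-mode=2 device=/dev/video0 ! image/jpeg,width=1920,height=1080,framerate=30/1 ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0 \
  v4l2src io-mode=2 device=/dev/video1 ! image/jpeg,width=640,height=360,framerate=30/1 ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 \
  nvcompositor name=comp sink_1::xpos=1270 sink_1::ypos=10 sink_0::xpos=50 sink_0::ypos=50 ! \
  nvvidconv ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! h264parse ! matroskamux ! filesink location=test.mkv"
echo "$CMD"
```

Paste the echoed command into a shell on the Jetson to try it.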

Perhaps I’m not sufficiently explaining what I mean by hanging. If I run the command with nvoverlaysink, the output stops at Redistribute latency... and I get the display shown properly. As soon as I try to write it to a file, it gets “stuck” at “NvMMLiteBlockCreate : Block : BlockType = 4”.

Whether I set num-buffers or not (hitting Ctrl+C after 1 minute), I only get a short file under 1 second long with the deformed video properties shown here:

Screen Shot 2020-10-06 at 11.08.58 AM

Hi,
We tried with videotestsrc and the mp4 looks OK. Please try:

gst-launch-1.0 \
videotestsrc is-live=1 num-buffers=100 ! 'video/x-raw,width=1920,height=1080,framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0  \
videotestsrc is-live=1 num-buffers=100 pattern=1 ! 'video/x-raw,width=640,height=480,framerate=30/1' !  nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 \
nvcompositor name=comp sink_1::xpos=1270 sink_1::ypos=10 sink_0::xpos=50 sink_0::ypos=50 ! \
nvvidconv ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! h264parse ! qtmux ! filesink location=videotestsrc_rgba.mp4 -e

The issue is probably in the v4l2 sources. Do you get the expected result using nvoverlaysink?

The command you provided works.

The command I run with nvoverlaysink with my cameras also works.

What do you think the issue is with the source that makes the difference between nvoverlaysink and filesink? I believe I’m doing all the right conversions and am unsure why nvoverlaysink works but filesink does not.

Hi,
Do the sources support MJPEG only, or also YUV422 formats (such as YUYV or UYVY)? In general, USB cameras support some YUV422 format. If another format is available, it may be worth a try.

Please refer to Jetson Nano FAQ:
Q: I have a USB camera. How can I launch it on Jetson Nano?


My cameras are MJPEG. The reason I purchased the Jetson was that, compared to the Pi, I figured the GPU would make the MJPEG decoding/H264 encoding process much less CPU intensive. Because cost is a concern, being able to deploy cameras that are MJPEG (3-4x cheaper for the desired frame rates) vs. YUV422 is important; I figured the Nano would be perfect.

I was able to get this to work with the omxh264enc encoder. But CPU usage is high with the pipeline needed to make it work without the nv decoders/converters/encoders. Running the same pipeline on the Pi 4 gives me lower CPU usage. Neither really works for my use case, though, so I’ll have to re-think the architecture. It’s a bit disappointing. All I’m trying to do is compose a picture-in-picture with a 1920x1080 source and a 640x480 source.

I will say, there is definitely some sort of bug in either nvcompositor or nvv4l2h264enc, however. If I stop using them, my pipeline works (though without fully utilizing the GPU, and with CPU usage reaching 90%). It’s something worth looking at on your end. I appreciate the help, though.

Hi,
Please replace your library with the attached prebuilt lib and set background-w and background-h:

... ! nvcompositor background-w=2560 background-h=1440 ... ! ...

We have tried with two E-Con CU135 cameras and it works fine. Please give it a try.
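Applied to the pipeline from earlier in the thread, that would look something like the following (my own untested adaptation, so again only assembled and printed; 2560x1440 is the background size suggested above, and `test.mp4` is an arbitrary filename):

```shell
# Hypothetical full command with the suggested background-w/background-h set
# on nvcompositor. Echoed for inspection; run it on the Jetson to test.
CMD="gst-launch-1.0 -e \
  v4l2src io-mode=2 device=/dev/video0 ! image/jpeg,width=1920,height=1080,framerate=30/1 ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0 \
  v4l2src io-mode=2 device=/dev/video1 ! image/jpeg,width=640,height=360,framerate=30/1 ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1 \
  nvcompositor name=comp background-w=2560 background-h=1440 sink_1::xpos=1270 sink_1::ypos=10 sink_0::xpos=50 sink_0::ypos=50 ! \
  nvvidconv ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! h264parse ! qtmux ! filesink location=test.mp4"
echo "$CMD"
```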

r32_43_TEST_libgstnvcompositor.zip (12.2 KB)

Thank you for still working on it! I will test once I get back to the MJPEG setups I was using.