NvCompositor SIGSEGV

Hello,

I’m facing an issue with the nvcompositor GStreamer element on my NVIDIA Jetson Tegra NX.

I’m trying to merge two RTSP H.264 streams into one side-by-side view. I successfully displayed it using two dummy video streams (videotestsrc element) and also using two instances of the same H.264 stream.

When I connect two H.264 cameras, I get a SIGSEGV.

Here is my pipeline:

gst-launch-1.0 \
nvcompositor interpolation-method=4 name=comp \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::width=960 sink_0::height=540 \
    sink_1::xpos=960 sink_1::ypos=0 sink_1::width=960 sink_1::height=540 \
    sink_2::xpos=0 sink_2::ypos=540 sink_2::width=960 sink_2::height=540 \
    sink_3::xpos=960 sink_3::ypos=540 sink_3::width=960 sink_3::height=540 \
  ! nvvidconv ! nv3dsink \
rtspsrc location=rtsp://**** latency=0 is-live=true ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! queue2 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_0 \
rtspsrc location=rtsp://**** latency=0 is-live=true ! rtph264depay ! h264parse ! nvv4l2decoder enable-max-performance=1 ! queue2 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! comp.sink_1

Here is the output:

Setting pipeline to PAUSED …
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Pipeline is live and does not need PREROLL …
Progress: (open) Opening Stream
Progress: (open) Opening Stream
Pipeline is PREROLLED …
Prerolled, waiting for progress to finish…
Progress: (connect) Connecting to rtsp://****
Progress: (connect) Connecting to rtsp://****
Progress: (open) Retrieving server options
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING …
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) SETUP stream 1
Progress: (request) Sending PLAY request
Progress: (open) Opened Stream
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
Progress: (request) Sent PLAY request
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Caught SIGSEGV
Spinning. Please run 'gdb gst-launch-1.0 9375' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.

Hi,
Please apply this patch for a try:
Jetson/L4T/r32.6.x patches - eLinux.org

[gstreamer] patch for running v4l2src + nvcompositor

With the patch, you can set a fixed background width and height. Please give it a try and see if the SIGSEGV still happens.

Thanks for your reply.

I’m really not sure how to apply this patch; could you point me to how to do it?

Hi,
The plugins are open source and you can build them manually. If you use JetPack 4.6.2 (r32.7.2), please download the source code from:
https://developer.nvidia.com/embedded/linux-tegra-r3272

L4T Driver Package (BSP) Sources

And follow the guidance to build it.
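As a rough sketch, the steps look like this. The archive and directory names below are assumptions from memory and may differ between L4T releases, so verify them against the contents of the downloaded tarball:

```shell
# Sketch: extract the BSP sources, patch nvcompositor, rebuild, and install.
# Archive/path names are assumptions -- check your r32.7.2 download.
tar xjf public_sources.tbz2                      # L4T Driver Package (BSP) Sources
cd Linux_for_Tegra/source/public
tar xjf gst-nvcompositor_src.tbz2                # nvcompositor plugin source
cd gst-nvcompositor
patch -p1 < /path/to/nvcompositor.patch          # the patch from the eLinux page
make
sudo cp libgstnvcompositor.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/
rm -rf ~/.cache/gstreamer-1.0                    # force a plugin registry rescan
```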

Hi,

I managed to patch nvcompositor but I’m still getting the SIGSEGV.

After further investigation with gdb, it seems that the function causing the SIGSEGV is get_nvcolorformat(), and in particular the line switch (GST_VIDEO_INFO_FORMAT (info)).
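For reference, a sketch of how such a crash can be localized with gdb (the full pipeline arguments are elided here; substitute the gst-launch command above):

```shell
# Sketch: run the failing pipeline under gdb to find the crashing frame.
# "<full pipeline>" stands for the gst-launch-1.0 arguments used above.
gdb --args gst-launch-1.0 <full pipeline>
# Inside gdb:
#   (gdb) run          # wait for the SIGSEGV to be caught
#   (gdb) bt           # backtrace; frames should point into gstnvcompositor.c
#   (gdb) frame N      # select a frame to inspect vaggpad->info
```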

To confirm it, I tried commenting out the line calling it, if (!get_nvcolorformat (&vaggpad->info, &nvcompositor_pad->comppad_pix_fmt)), and my pipeline now works (but I’m getting a lot of GST_ERROR output).

The errors displayed are:

videometa gstvideometa.c:240:default_map: plane 1, no memory at offset 2088960
default video-frame.c:168:gst_video_frame_map_id: failed to map video frame plane 1

Please tell me if you need further info.

Thanks for your help !

Hi,
We have not seen this sort of issue. Does it work if you launch a single RTSP source? If the issue is related to color format, launching a single camera should also fail.

Hi,

I tried connecting only one camera with the patched nvcompositor (including the if statement) and it works fine. Adding another RTSP source (different from the first one) makes it crash with SIGSEGV.

Thanks for your help.

Hi,
Please set background-w and background-h to a bit larger than the desired resolution and try xvimagesink, like:

... ! nvvidconv ! video/x-raw,format=I420 ! xvimagesink

See if this works.

I’m still getting the SIGSEGV error.

Here is my pipeline:

gst-launch-1.0 \
nvcompositor background-w=1920 background-h=1080 name=comp \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::width=960 sink_0::height=540 \
    sink_1::xpos=960 sink_1::ypos=0 sink_1::width=960 sink_1::height=540 \
    sink_2::xpos=0 sink_2::ypos=540 sink_2::width=960 sink_2::height=540 \
    sink_3::xpos=960 sink_3::ypos=540 sink_3::width=960 sink_3::height=540 \
  ! nvvidconv ! 'video/x-raw,format=I420' ! xvimagesink \
rtspsrc location=rtsp://***** latency=0 ! rtph264depay ! nvv4l2decoder ! queue2 ! comp.sink_0 \
rtspsrc location=rtsp://****** latency=0 ! rtph264depay ! nvv4l2decoder ! queue2 ! comp.sink_1

Hi,
Do you have USB cameras for a try? The patch should work for this use case. We would like to know whether it works with USB cameras and the failure is specific to RTSP sources.

I don’t have one for now, but I’ll be able to try with MIPI and USB cameras in a few days. I’ll keep you updated ASAP.

Thanks for your help !

Hi,
For comparison, you may also try an RTSP server launched through test-launch. Please refer to
Jetson Nano FAQ

Q: Is there any example of running RTSP streaming?

to see whether it works with another kind of RTSP source.

I also get the fault when both inputs read from the same RTSP server, but the following works fine with two RTSP servers:

gst-launch-1.0 \
nvcompositor name=comp \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::width=960 sink_0::height=540 \
    sink_1::xpos=960 sink_1::ypos=0 sink_1::width=960 sink_1::height=540 \
    sink_2::xpos=0 sink_2::ypos=540 sink_2::width=960 sink_2::height=540 \
    sink_3::xpos=960 sink_3::ypos=540 sink_3::width=960 sink_3::height=540 \
  ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvvidconv ! 'video/x-raw,format=I420' ! xvimagesink  \
rtspsrc location=rtsp://127.0.0.1:8554/test latency=0 ! rtph264depay ! nvv4l2decoder ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! queue ! comp.sink_0 \
rtspsrc location=rtsp://127.0.0.1:8556/test latency=0 ! rtph264depay ! nvv4l2decoder ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! queue ! comp.sink_1

For reference, I created the RTSP servers with:

./test-launch -p 8554 "videotestsrc ! nvvidconv ! video/x-raw(memory:NVMM),width=960, height=540, framerate=30/1 ! nvv4l2h264enc maxperf-enable=1 insert-sps-pps=1 insert-vui=1 idrinterval=15 ! h264parse ! rtph264pay name=pay0"

and

./test-launch -p 8556 "videotestsrc pattern=ball ! nvvidconv ! video/x-raw(memory:NVMM),width=960, height=540, framerate=30/1 ! nvv4l2h264enc maxperf-enable=1 insert-sps-pps=1 insert-vui=1 idrinterval=15 ! h264parse ! rtph264pay name=pay0"

Are you trying to read from the same camera? If so, reading from two different IP cameras may work better.

EDIT: I can read from the same RTSP URI when using uridecodebin. Using uridecodebin on one input and rtspsrc on the other also segfaults.

Better try:

gst-launch-1.0 \
nvcompositor name=comp \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::width=960 sink_0::height=540 \
    sink_1::xpos=960 sink_1::ypos=0 sink_1::width=960 sink_1::height=540 \
    sink_2::xpos=0 sink_2::ypos=540 sink_2::width=960 sink_2::height=540 \
    sink_3::xpos=960 sink_3::ypos=540 sink_3::width=960 sink_3::height=540 \
  ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvvidconv ! 'video/x-raw,format=I420' ! xvimagesink \
uridecodebin uri=rtsp://127.0.0.1:8554/test source::latency=0 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! queue ! comp.sink_0 \
uridecodebin uri=rtsp://127.0.0.1:8556/test source::latency=0 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! queue ! comp.sink_1

I tried with one of my RTSP cameras and a dummy RTSP stream (RTSP server + videotestsrc); it either:

  • Crashes at launch with SIGSEGV
  • Launches fine, but the dummy stream is broken (freezes / poor quality)

@Honey_Patouceul Using uridecodebin I’m experiencing the same issues.

Please let us know whether the suggested command above with uridecodebin works with the 2 local RTSP servers.
If it does, and it only fails when using the remote IP camera and/or the remote dummy RTSP server, you may try both:

  • increasing latency to 1000 ms
  • using TCP transport: change the URI to rtspt://<your_uri>, or with rtspsrc try the property protocols=tcp.
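For instance, a sketch combining both suggestions, assuming rtspsrc inputs as in the earlier pipelines (rtsp://<cam1_uri> and rtsp://<cam2_uri> are placeholders for your camera URIs):

```shell
# Sketch: 1000 ms latency plus TCP transport on both RTSP sources.
# <cam1_uri>/<cam2_uri> are placeholders -- substitute your cameras.
gst-launch-1.0 \
nvcompositor name=comp \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::width=960 sink_0::height=540 \
    sink_1::xpos=960 sink_1::ypos=0 sink_1::width=960 sink_1::height=540 \
  ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvvidconv ! 'video/x-raw,format=I420' ! xvimagesink \
rtspsrc location=rtsp://<cam1_uri> latency=1000 protocols=tcp ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! queue ! comp.sink_0 \
rtspsrc location=rtsp://<cam2_uri> latency=1000 protocols=tcp ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! queue ! comp.sink_1
```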

With the 2 local RTSP servers, nvcompositor doesn’t crash but the output is frozen; adding latency resolves the issue.

I tried increasing the latency and/or using TCP transport with the IP cameras, but nvcompositor is still crashing.

Thanks for your help.


Hi,
There is a race condition for multiple RTSP sources. Please apply the following patch to gstnvcompositor.c and try again:

@@ -772,7 +772,11 @@ gst_nvcompositor_fixate_caps (GstAggregator * agg, GstCaps * caps)
     nvcompositor_pad->input_width = GST_VIDEO_INFO_WIDTH (&vaggpad->info);
     nvcompositor_pad->input_height = GST_VIDEO_INFO_HEIGHT (&vaggpad->info);
 
-    if (!get_nvcolorformat (&vaggpad->info, &nvcompositor_pad->comppad_pix_fmt)) {
+    if (vaggpad->info.finfo == NULL) {
+      GST_WARNING_OBJECT (vagg, "This pad is invalid");
+      continue;
+    }
+    else if (!get_nvcolorformat (&vaggpad->info, &nvcompositor_pad->comppad_pix_fmt)) {
       GST_ERROR_OBJECT (vagg, "Failed to get nvcompositorpad input NvColorFormat");
       return ret;
     }