SIGSEGV when running deepstream-app and a second pipeline process

I am attempting to run a command-line pipeline while deepstream-app is running. My deepstream-app pipeline uses a file source, an inference element, OSD, a hardware encoder, and a file sink.

The second pipeline and its output are included below.

This command is being run as another process while the main deepstream process is also running.

gst-launch-1.0     multifilesrc location=/mnt/media/images/tmp/%08d.jpg caps=image/jpeg     ! nvjpegdec     ! nvvidconv     ! 'video/x-raw(memory:NVMM),format=(string)I420,framerate=(fraction)20/1,width=720,height=480'     ! nvv4l2h264enc     ! h264parse     ! mp4mux     ! filesink location=/mnt/media/videos/timelapse/2021-04-22T10:56:45.mp4
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Pipeline is PREROLLING ...
Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
Caught SIGSEGV
#0  0x0000007fb36dae28 in __GI___poll (fds=0x55b1e8dce0, nfds=548472259128, timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:41
#1  0x0000007fb37e7e08 in  () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
#2  0x00000055b1d35ac0 in  ()
Spinning.  Please run 'gdb gst-launch-1.0 14957' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.

Any suggestion on how to debug this would be greatly appreciated.

The following is printed to syslog when the second command is run:

Apr 22 11:13:22 jetson1 kernel: [79542.386486] nvmap_alloc_handle: PID 17754: gst-launch-1.0: WARNING: All NvMap Allocations must have a tag to identify the subsystem allocating memory.Please pass the tag to the API call NvRmMemHanldeAllocAttr() or relevant. 

Here I try the second pipeline with a software encoder (x264enc) instead of the hardware encoder:

gst-launch-1.0     multifilesrc location=/mnt/media/images/tmp/%08d.jpg caps=image/jpeg     ! nvjpegdec     ! x264enc     ! h264parse     ! mp4mux     ! filesink location=/mnt/media/videos/timelapse/2021-04-22T11:21:46.mp4
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Redistribute latency...
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
Caught SIGSEGV
#0  0x0000007f85e4de28 in __GI___poll (fds=0x55b27ec220, nfds=547708318264, timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:41
#1  0x0000007f85f5ae08 in  () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
#2  0x00000055b26a7ac0 in  ()
Spinning.  Please run 'gdb gst-launch-1.0 19204' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.

This is the pipeline with omxh264enc instead:

gst-launch-1.0     multifilesrc location=/mnt/media/images/tmp/%08d.jpg caps=image/jpeg     ! nvjpegdec     ! omxh264enc     ! qtmux     ! filesink location=/mnt/media/videos/timelapse/2021-04-22T11:24:19.mp4
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
H264: Profile = 66, Level = 40 
NvMMLiteVideoEncDoWork: Surface resolution (0 x 0) smaller than encode resolution (720 x 480)
VENC: NvMMLiteVideoEncDoWork: 4283: BlockSide error 0x4
Event_BlockError from 0BlockAvcEnc : Error code - 4
Sending error event from 0BlockAvcEncERROR: from element /GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0: GStreamer encountered a general supporting library error.
Additional debug info:
/dvs/git/dirty/git-master_linux/3rdparty/gst/gst-omx/omx/gstomxvideoenc.c(1331): gst_omx_video_enc_loop (): /GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0:
OpenMAX component in error state Bad parameter (0x80001005)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Caught SIGSEGV
#0  0x0000007f7d664d5c in __waitpid (pid=<optimized out>, stat_loc=0x7ff4287a84, options=<optimized out>) at ../sysdeps/unix/sysv/linux/waitpid.c:30
#1  0x0000007f7d6a02a0 in g_on_error_stack_trace ()
#2  0x0000005565ab7c3c in fault_spin () at gst-launch.c:103
#3  0x0000005565ab7c3c in fault_handler_sighandler (signum=1705739560)
#4  0x0000007f7d96d6c0 in <signal handler called> ()
#5  0x0000007f78cd2ee8 in  ()
#6  0x0000007f78cd4a54 in  ()
#7  0x0000007f78cd4b74 in  ()
#8  0x0000007f78ceb424 in  ()
#9  0x0000007f7d84fef0 in gst_element_change_state (element=element@entry=0x557571a990, transition=transition@entry=GST_STATE_CHANGE_PAUSED_TO_READY)
#10 0x0000007f7d850644 in gst_element_set_state_func (element=0x557571a990, state=<optimized out>) at gstelement.c:2906
#11 0x0000007f7d828aa4 in gst_bin_element_set_state (next=GST_STATE_NULL, current=5, start_time=1, base_time=10425750327789905408, element=0x557571a990, bin=0x5575732080) at gstbin.c:2604
#12 0x0000007f7d828aa4 in gst_bin_change_state_func (element=0x5575732080, transition=4096299552) at gstbin.c:2946
#13 0x0000007f7d879f60 in gst_pipeline_change_state (element=0x5575732080, transition=GST_STATE_CHANGE_READY_TO_NULL) at gstpipeline.c:508
#14 0x0000007f7d84fef0 in gst_element_change_state (element=element@entry=0x5575732080, transition=transition@entry=GST_STATE_CHANGE_READY_TO_NULL)
#15 0x0000007f7d850644 in gst_element_set_state_func (element=0x5575732080, state=<optimized out>) at gstelement.c:2906
#16 0x0000005565ab5b80 in main (argc=<optimized out>, argv=<optimized out>)
Spinning.  Please run 'gdb gst-launch-1.0 19730' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

It's a Jetson Nano with JetPack (L4T 32.4) and DeepStream 5.0.1.

We will try to reproduce your issue; in the meantime, could you please try it with JetPack 4.5.1 to see if the issue is there?

Hi,
An internal bug has been filed; we will look into this issue. Thanks.

I have tried with JetPack 4.5.1 and DeepStream 5.1, with the same result.

I put this at line 642 of deepstream_app_main.c:

system("gst-launch-1.0 \
    multifilesrc location=img/%08d.jpg caps=image/jpeg \
    ! nvjpegdec \
    ! nvvidconv \
    ! 'video/x-raw(memory:NVMM),format=(string)I420,framerate=(fraction)20/1,width=720,height=480' \
    ! nvv4l2h264enc \
    ! h264parse \
    ! mp4mux \
    ! filesink location=timelapse1.mp4");

The result is the same regardless of DeepStream or JetPack version.
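One way to tell "the child never started" apart from "the child crashed" is to inspect the value `system()` returns, since it is the child's wait status. A small helper sketch (`run_pipeline` is a hypothetical name, not part of deepstream-app):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

/* Run a shell command via system() and report how the child ended.
 * system() returns the child's wait status, so a crash such as a
 * SIGSEGV in the gst-launch process is visible to the caller. */
int run_pipeline(const char *cmd)
{
    int status = system(cmd);
    if (status == -1) {
        perror("system");   /* fork/exec of the shell itself failed */
        return -1;
    }
    if (WIFSIGNALED(status))
        fprintf(stderr, "pipeline killed by signal %d\n", WTERMSIG(status));
    else if (WIFEXITED(status))
        fprintf(stderr, "pipeline exited with status %d\n", WEXITSTATUS(status));
    return status;
}
```

Passing the gst-launch command string to this helper instead of calling `system()` directly would log whether the second pipeline exited cleanly or was killed by a signal.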

@amycao, thank you for your help with this.

Hi @mattcarp88 ,
Can you try the command below?

gst-launch-1.0  multifilesrc location=~/out/%08d.jpg caps=image/jpeg  !  nvjpegdec   ! nvvideoconvert   ! 'video/x-raw(memory:NVMM),format=(string)I420,framerate=(fraction)30/1,width=720,height=480'     ! nvv4l2h264enc     ! h264parse     ! mp4mux     ! filesink location=~/generate.mp4

We verified it works on our side.

Hello @mchi, did you test the command as a subprocess of deepstream-app? That is when I experience the behavior above.

Hello @mchi,

Are you able to reproduce the error when running the command as a subprocess?

Yes, we can reproduce it.
It seems the gst command called by system() does not start running.
May I know why you want to run it this way?

I would like two separate pipelines running simultaneously in my application.

By the way, I constructed the same pipeline in C code and tried running it simultaneously with the main pipeline, and I get the same error.
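The in-process construction was essentially the gst-launch string handed to `gst_parse_launch`. A minimal sketch of that approach (assuming `gst_init()` has already run in deepstream-app; paths are placeholders, and this blocks until EOS or error, so it would be launched from its own thread):

```c
#include <gst/gst.h>

/* Build and run the second pipeline inside the deepstream-app process.
 * The pipeline string matches the gst-launch command above; no shell
 * quoting is needed around the caps when using gst_parse_launch. */
void run_second_pipeline(void)
{
    GError *err = NULL;
    GstElement *pipe = gst_parse_launch(
        "multifilesrc location=img/%08d.jpg caps=image/jpeg"
        " ! nvjpegdec ! nvvidconv"
        " ! video/x-raw(memory:NVMM),format=(string)I420,"
        "framerate=(fraction)20/1,width=720,height=480"
        " ! nvv4l2h264enc ! h264parse ! mp4mux"
        " ! filesink location=timelapse1.mp4", &err);
    if (!pipe) {
        g_printerr("parse error: %s\n", err->message);
        g_clear_error(&err);
        return;
    }
    gst_element_set_state(pipe, GST_STATE_PLAYING);

    /* Block until the pipeline errors out or finishes. */
    GstBus *bus = gst_element_get_bus(pipe);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipe, GST_STATE_NULL);
    gst_object_unref(pipe);
}
```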

A DeepStream application is expected to run with DS components. You can use nvvideoconvert, which is the recommended DS component, as supporting NvBufSurface output in nvvidconv is not planned.

I don't see how your response relates to this issue; can you clarify? I am using the nvvidconv element.

nvvidconv has no support for outputting NvBufSurface, which nvv4l2h264enc expects when it runs in a DeepStream application where the DS_NEW_BUFAPI flag gets set. So this results in a segfault.
So if you replace the pipeline in the application with

system("gst-launch-1.0 videotestsrc ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420,framerate=1/1,width=720,height=480' ! identity silent=0 ! nvv4l2h264enc ! fakesink -v ");

where there is no nvjpegdec, this will also result in a segfault, as nvv4l2h264enc expects NvBufSurface from nvvidconv.
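Concretely, following that suggestion means swapping nvvideoconvert into the original system() call. A sketch that builds the command string separately so it can be inspected before launching (`build_pipeline_cmd` is a hypothetical helper and the paths are placeholders):

```c
#include <stdio.h>

/* Compose the second-pipeline command with nvvideoconvert, the
 * DS-aware converter, in place of nvvidconv. Returns the length
 * snprintf would have written, so truncation can be detected. */
int build_pipeline_cmd(char *buf, unsigned long len,
                       const char *src_pattern, const char *out_file)
{
    return snprintf(buf, len,
        "gst-launch-1.0 multifilesrc location=%s caps=image/jpeg"
        " ! nvjpegdec ! nvvideoconvert"
        " ! 'video/x-raw(memory:NVMM),format=(string)I420,"
        "framerate=(fraction)20/1,width=720,height=480'"
        " ! nvv4l2h264enc ! h264parse ! mp4mux"
        " ! filesink location=%s",
        src_pattern, out_file);
}
```

The result would then be passed to system() after checking that the returned length fits in the buffer.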

A segfault can also be seen with a gst-launch pipeline when DS_NEW_BUFAPI=1 is set:

export DS_NEW_BUFAPI=1
gst-launch-1.0 videotestsrc ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420,framerate=1/1,width=720,height=480' ! identity silent=0 ! nvv4l2h264enc ! fakesink -v

That is why nvvideoconvert was suggested in comment 20.
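Since the export above shows the flag travels through the environment, and system() children inherit the parent's environment, one untested workaround sketch would be to clear DS_NEW_BUFAPI before spawning the second pipeline. This assumes the flag is consumed only via the inherited environment, which the thread suggests but does not confirm:

```c
#include <stdlib.h>

/* Untested workaround sketch: drop DS_NEW_BUFAPI from the environment
 * so the child gst-launch process does not see the DeepStream
 * buffer-API flag. Assumes nvvidconv/nvv4l2h264enc read the flag
 * only from the environment. */
int run_without_ds_bufapi(const char *cmd)
{
    unsetenv("DS_NEW_BUFAPI");   /* system() children inherit our env */
    return system(cmd);
}
```

Note that this also removes the variable from the deepstream-app process itself; whether that is safe for the main pipeline is untested.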

@amycao Thank you for your response. I am still not understanding: is there a problem with my pipeline? It works fine outside as a gst-launch command, correct? Is there a way to run the same pipeline in parallel within deepstream-app?