L4T Multimedia API Reference samples: video decode segfault

Hello,

I am trying to decode one of the video files to raw format so that I can use it for inference with the DetectNet example from Jetson-Inference.

I ran ./video_dec_cuda <input_file> H264 -o <output_file> and got a segmentation fault. Any thoughts on what could have gone wrong?

Here is the command line output.

Failed to query video capabilities: Inappropriate ioctl for device
NvMMLiteOpen : Block : BlockType = 261 
TVMR: NvMMLiteTVMRDecBlockOpen: 7907: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 261 
Failed to query video capabilities: Inappropriate ioctl for device
Starting decoder capture loop thread
Input file read complete
TVMR: NvMMLiteTVMRDecDoWork: 6768: NVMMLITE_TVMR: EOS detected
TVMR: TVMRBufferProcessing: 5723: Processing of EOS 
TVMR: TVMRBufferProcessing: 5800: Processing of EOS Done
Segmentation fault (core dumped)

I also tried video_decode and got a segfault there as well.

Set governor to performance before enabling profiler
Failed to query video capabilities: Inappropriate ioctl for device
NvMMLiteOpen : Block : BlockType = 261 
TVMR: NvMMLiteTVMRDecBlockOpen: 7907: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 261 
Failed to query video capabilities: Inappropriate ioctl for device
Starting decoder capture loop thread
Input file read complete
TVMR: NvMMLiteTVMRDecDoWork: 6768: NVMMLITE_TVMR: EOS detected
TVMR: TVMRBufferProcessing: 5723: Processing of EOS 
TVMR: TVMRBufferProcessing: 5800: Processing of EOS Done
Segmentation fault (core dumped)

Please try

./video_dec_cuda ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 --input-nalu

Hi DaneLLL,

Thanks for your answer. I was able to run this example successfully.

I am very new to video encoding/decoding pipelines, and I am trying to decode an ‘.avi’ file. Can these samples decode such files? I want a raw ‘.yuv’ file out of it.

Any pointers will be appreciated.

Thanks,
Bhargav

Please try

./video_dec_cuda ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 --input-nalu

Hi DaneLLL,

I am sorry, but please refer to my second question.

The tegra_multimedia_api samples only support decoding raw H.264 streams.

You may use GStreamer for your case. Here is the user guide:
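As a rough sketch of a decode-to-YUV pipeline (not from the guide; it assumes the .avi actually contains an H.264 stream, and element availability depends on your L4T release):

gst-launch-1.0 filesrc location=input.avi ! avidemux ! h264parse ! omxh264dec ! nvvidconv ! "video/x-raw, format=(string)I420" ! filesink location=output.yuv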

OK, going through the GStreamer API.

Hello,

I now have a fair understanding of the GStreamer API and pipelines. Currently, I have a few issues for which I am unable to find a proper solution.

  1. I believe that the following pipeline works (as per the documentation linked above):
    gst-launch-1.0 filesrc location=/home/nvidia/Downloads/TUD-Campus.mp4 ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! omxh264dec ! nveglglessink
    

    However, when I feed in another .mp4 video, it does not work and shows errors. The pipeline and corresponding output are as follows:

    GST_DEBUG=3 gst-launch-1.0 filesrc location=/home/nvidia/DJI_0021_480.MP4 ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! omxh264dec ! nveglglessink
    0:00:00.062383648  8931       0x627440 WARN                     omx gstomx.c:2836:plugin_init: Failed to load configuration file: Valid key file could not be found in search dirs (searched in: /home/nvidia/.config:/etc/xdg/xdg-ubuntu:/usr/share/upstart/xdg:/etc/xdg as per GST_OMX_CONFIG_DIR environment variable, the xdg user config directory (or XDG_CONFIG_HOME) and the system config directory (or XDG_CONFIG_DIRS)
    Setting pipeline to PAUSED ...
    0:00:00.111006848  8931       0x627440 WARN                 basesrc gstbasesrc.c:3489:gst_base_src_start_complete:<filesrc0> pad not activated yet
    Pipeline is PREROLLING ...
    0:00:00.111425472  8931       0x626230 WARN                 qtdemux qtdemux.c:2651:qtdemux_parse_trex:<demux> failed to find fragment defaults for stream 1
    0:00:00.111513120  8931       0x627440 WARN               structure gststructure.c:1935:priv_gst_structure_append_to_gstring: No value transform to serialize field 'display' of type 'GstEGLDisplay'
    Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
    0:00:00.111630944  8931       0x626230 WARN                 basesrc gstbasesrc.c:2396:gst_base_src_update_length:<filesrc0> processing at or past EOS
    0:00:00.112433568  8931       0x626280 FIXME           videodecoder gstvideodecoder.c:946:gst_video_decoder_drain_out:<omxh264dec-omxh264dec0> Sub-class should implement drain()
    NvMMLiteOpen : Block : BlockType = 261 
    TVMR: NvMMLiteTVMRDecBlockOpen: 7907: NvMMLiteBlockOpen 
    NvMMLiteBlockCreate : Block : BlockType = 261 
    TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec 
    Event_BlockError from 0BlockH264Dec : Error code - e3040
    Sending error event from 0BlockH264Dec0:00:00.117813504  8931   0x7f7c001720 ERROR                    omx gstomx.c:504:EventHandler:<omxh264dec-omxh264dec0> decode got error: Format not detected (0x80001020)
    TVMR: NvMMLiteTVMRDecDoWork: 6430: TVMR Video Dec Unsupported Stream 
    0:00:00.117890848  8931       0x626280 ERROR                    omx gstomx.c:276:gst_omx_component_handle_messages:<omxh264dec-omxh264dec0> decode got error: Format not detected (0x80001020)
    0:00:00.117919840  8931       0x626280 ERROR                    omx gstomx.c:1293:gst_omx_port_acquire_buffer:<omxh264dec-omxh264dec0> Component decode is in error state: Format not detected
    0:00:00.117958208  8931       0x626280 WARN             omxvideodec gstomxvideodec.c:3743:gst_omx_video_dec_handle_frame:<omxh264dec-omxh264dec0> error: OpenMAX component in error state Format not detected (0x80001020)
    0:00:00.117951104  8931   0x7f7c002d90 ERROR                    omx gstomx.c:1293:gst_omx_port_acquire_buffer:<omxh264dec-omxh264dec0> Component decode is in error state: Format not detected
    0:00:00.118007168  8931   0x7f7c002d90 WARN             omxvideodec gstomxvideodec.c:2822:gst_omx_video_dec_loop:<omxh264dec-omxh264dec0> error: OpenMAX component in error state Format not detected (0x80001020)
    0:00:00.118094624  8931       0x626230 WARN                 qtdemux qtdemux.c:5520:gst_qtdemux_loop:<demux> error: streaming stopped, reason error
    ERROR: from element /GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0: GStreamer encountered a general supporting library error.
    0:00:00.118153504  8931       0x626230 WARN                   queue gstqueue.c:992:gst_queue_handle_sink_event:<queue0> error: Internal data flow error.
    0:00:00.118185920  8931       0x626230 WARN                   queue gstqueue.c:992:gst_queue_handle_sink_event:<queue0> error: streaming task paused, reason error (-5)
    Additional debug info:
    /dvs/git/dirty/git-master_linux/external/gstreamer/gst-omx/omx/gstomxvideodec.c(2822): gst_omx_video_dec_loop (): /GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0:
    OpenMAX component in error state Format not detected (0x80001020)
    ERROR: pipeline doesn't want to preroll.
    Setting pipeline to NULL ...
    Caught SIGSEGV
    TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec 
    Event_BlockError from 0BlockH264Dec : Error code - e3040
    Blocking error event from 0BlockH264DecTVMR: NvMMLiteTVMRDecDoWork: 6430: TVMR Video Dec Unsupported Stream 
    TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec 
    TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec 
    Event_BlockError from 0BlockH264Dec : Error code - e3040
    Blocking error event from 0BlockH264DecTVMR: NvMMLiteTVMRDecDoWork: 6430: TVMR Video Dec Unsupported Stream 
    TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec 
    TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec 
    Event_BlockError from 0BlockH264Dec : Error code - e3040
    Blocking error event from 0BlockH264DecTVMR: NvMMLiteTVMRDecDoWork: 6430: TVMR Video Dec Unsupported Stream 
    TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec 
    TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec 
    Event_BlockError from 0BlockH264Dec : Error code - e3040
    Blocking error event from 0BlockH264DecTVMR: NvMMLiteTVMRDecDoWork: 6430: TVMR Video Dec Unsupported Stream 
    TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec 
    TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec 
    Event_BlockError from 0BlockH264Dec : Error code - e3040
    Blocking error event from 0BlockH264DecTVMR: NvMMLiteTVMRDecDoWork: 6430: TVMR Video Dec Unsupported Stream 
    #0  0x0000007f8e260130 in pthread_join (threadid=547785761280, 
    #1  0x0000007f8e316e40 in ?? () from /lib/aarch64-linux-gnu/libglib-2.0.so.0
    #2  0x0000000000000011 in ?? ()
    Spinning.  Please run 'gdb gst-launch-1.0 8931' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.
    
  2. Another question is related to the appsink. When I use gst-launch with autovideosink/nveglglessink, I am able to view the contents properly. However, when I use the same pipeline in the jetson-inference code, the output video is corrupted. The following two links are screenshots of each case:

    https://photos.app.goo.gl/uxuCgXaZBllayeYD2

    https://photos.app.goo.gl/19tgkDH7YjoFdXHx1

    The pipeline in the code is:

    filesrc location=/home/nvidia/Downloads/TUD-Campus.mp4 ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! omxh264dec ! appsink
    

    which is then passed to gst_parse_launch().
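    For reference, the launch string is consumed roughly like this (a simplified sketch, assuming gst_init() has already been called; the helper name is mine, not from the jetson-inference code):

    #include <gst/gst.h>

    // Simplified sketch: turn a launch string into a pipeline, much as
    // gst-launch-1.0 does internally. Assumes gst_init() was called first.
    GstElement* buildPipeline(const char* launchStr)
    {
        GError* err = nullptr;
        GstElement* pipeline = gst_parse_launch(launchStr, &err);

        if (err != nullptr) {
            g_printerr("gst_parse_launch: %s\n", err->message);
            g_error_free(err);
            return nullptr;
        }
        return pipeline;
    }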

Hi bhargavK, it looks like the video file is invalid.
TVMR: TVMRBufferProcessing: 5668: video_parser_parse Unsupported Codec

Can you try other video files?

Hello DaneLLL,

Thanks for your reply. As mentioned, I am able to play another MP4 video file successfully.

Now that you mention it, I believe the file is indeed corrupted, as I am not able to play it with the VLC player either. That could be because I created this video by processing a file with OpenCV on my Mac. I will try creating the file again on the Jetson itself.

By the way, any pointers/ideas on the second question? Maybe dusty_nv has some pointers? I am not sure whether I can tag people on this forum.

Thanks,
Bhargav

Here is a post about decoding + appsink:
https://devtalk.nvidia.com/default/topic/1011376/jetson-tx1/gstreamer-decode-live-video-stream-with-the-delay-difference-between-gst-launch-1-0-command-and-appsink-callback/post/5160929/#5160929

Thanks! I was able to run the appsink code you provided successfully.

However, I noticed one thing. When I run the code, I get this output:

Using launch string: filesrc location=../P1220016.MOV ! decodebin ! nvvidconv ! video/x-raw, format=I420, width=1920, height=1080 ! appsink name=mysink 
NvMMLiteOpen : Block : BlockType = 261 
TVMR: NvMMLiteTVMRDecBlockOpen: 7907: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 261 
TVMR: cbBeginSequence: 1223: BeginSequence  3840x2160, bVPR = 0
TVMR: LowCorner Frequency = 345000 
TVMR: cbBeginSequence: 1622: DecodeBuffers = 6, pnvsi->eCodec = 4, codec = 0 
TVMR: cbBeginSequence: 1693: Display Resolution : (3840x2160) 
TVMR: cbBeginSequence: 1694: Display Aspect Ratio : (3840x2160) 
TVMR: cbBeginSequence: 1762: ColorFormat : 5 
TVMR: cbBeginSequence:1767 ColorSpace = NvColorSpace_YCbCr709_ER
TVMR: cbBeginSequence: 1904: SurfaceLayout = 3
TVMR: cbBeginSequence: 2005: NumOfSurfaces = 13, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 2007: BeginSequence  ColorPrimaries = 1, TransferCharacteristics = 1, MatrixCoefficients = 1
Allocating new output: 3840x2160 (x 13), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3464: Send OMX_EventPortSettingsChanged : nFrameWidth = 3840, nFrameHeight = 2160 
TVMR: FrameRate = 29 
TVMR: NVDEC LowCorner Freq = (333500 * 1024) 
TVMR: FrameRate = 29.970090 
TVMR: FrameRate = 29.970090 
TVMR: FrameRate = 29.970090 
TVMR: FrameRate = 29.970090 
TVMR: FrameRate = 29.970090 
TVMR: NvMMLiteTVMRDecDoWork: 6768: NVMMLITE_TVMR: EOS detected
TVMR: TVMRBufferProcessing: 5723: Processing of EOS 
TVMR: TVMRBufferProcessing: 5800: Processing of EOS Done
app sink receive eos
TVMR: TVMRFrameStatusReporting: 6369: Closing TVMR Frame Status Thread -------------
TVMR: TVMRVPRFloorSizeSettingThread: 6179: Closing TVMRVPRFloorSizeSettingThread -------------
TVMR: TVMRFrameDelivery: 6219: Closing TVMR Frame Delivery Thread -------------
TVMR: NvMMLiteTVMRDecBlockClose: 8105: Done 
going to exit, decode 645 frames in 22 seconds

The point to note is that even though the width and height are provided in the pipeline string, the video is still read at its original resolution. Whereas in the output you posted at:
https://devtalk.nvidia.com/default/topic/1011376/jetson-tx1/gstreamer-decode-live-video-stream-with-the-delay-difference-between-gst-launch-1-0-command-and-appsink-callback/post/5160929/#5160929
your video reads in at the resolution you provide in your pipeline. (I think that is because the video you tested on is the same size as the parameters you input.)

My understanding is that the ‘nvvidconv’ element is capable of scaling.
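For what it's worth, running the pipeline with gst-launch in verbose mode and a fakesink should print the caps negotiated on each pad (a sketch along the lines of my test, with the target sizes hard-coded):

gst-launch-1.0 -v filesrc location=../P1220016.MOV ! decodebin ! nvvidconv ! "video/x-raw, format=(string)I420, width=(int)1920, height=(int)1080" ! fakesink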

Am I missing something in the pipeline?

Thanks for all the help.

Hi bhargavK,
In your pipeline, you should get 1920x1080 YUVs in appsink. No?

Judging by the following line in the log, I would think that I am not:

Allocating new output: 3840x2160 (x 13), ThumbnailMode = 0

Is there any other way to check this with your test code (attached here)?

Moreover, with this pipeline too, I am getting similar results in the appsink of my inference code. I merely modified the launch string:

ss << "filesrc location=/home/nvidia/test.mp4"
   << " ! decodebin"
   << " ! nvvidconv ! video/x-raw, format=(string)I420"
   << ", width=(int)" << mWidth << ", height=(int)" << mHeight
   << " ! appsink name=mysink";

gstreamerPipeline.cpp (3.06 KB)
inferenceFrame.png

Hi bhargavK,
You can check the frame size. For 1920x1080 I420, it is 1920x1080x1.5 bytes.
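A minimal sketch of such a check, assuming a new-sample callback is registered on the appsink (the names here are illustrative, not from the attached code):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

// Sketch: print the size of each buffer pulled from appsink and compare it
// against the expected I420 frame size (width * height * 3 / 2 bytes).
static GstFlowReturn onNewSample(GstAppSink* sink, gpointer /*userData*/)
{
    const gsize expected = 1920 * 1080 * 3 / 2;  // 3,110,400 bytes for 1080p I420

    GstSample* sample = gst_app_sink_pull_sample(sink);
    if (sample == nullptr)
        return GST_FLOW_EOS;

    GstBuffer* buffer = gst_sample_get_buffer(sample);
    g_print("frame size: %" G_GSIZE_FORMAT " (expected %" G_GSIZE_FORMAT ")\n",
            gst_buffer_get_size(buffer), expected);

    gst_sample_unref(sample);
    return GST_FLOW_OK;
}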

Is your inference code running on the GPU or the CPU? The buffer you get via appsink is in CPU memory. The performance can be worse than processing on the GPU.

The inference code runs on the GPU. I will keep changing the pipeline and will update if I find anything weird.

For your reference, you may consider using nvivafilter.
https://devtalk.nvidia.com/default/topic/963123/jetson-tx1/video-mapping-on-jetson-tx1/post/4979740/#4979740

Hi Dane,

Thanks for all your support. After some ‘discovery and inspection’ (using gst-discoverer/gst-inspect!) of the different files I have, I have come to the conclusion that there is nothing wrong with my pipeline (again, thanks to you for your suggestions too).

The reason I believe so is that the following pipeline works fine:

ss << "filesrc location=/home/nvidia/DJI_0021_480.avi"
   << " ! decodebin ! videoscale ! video/x-raw, format=(string)RGB, "
   << "width=(int)" << mWidth << ", height=(int)" << mHeight
   << " ! videoconvert ! appsink name=mysink";

It works because videoconvert can produce the ‘RGB’ format, and the inference code then calls a function to convert RGB to RGBA.
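That conversion is conceptually just a repack with an opaque alpha channel. A hypothetical CPU-side sketch of it (the actual conversion in the inference code runs on the GPU):

#include <cstdint>

// Hypothetical sketch of the RGB -> RGBA repack; the real conversion in
// the inference code runs as a CUDA kernel, but the logic is the same.
static void rgbToRgba(const uint8_t* rgb, uint8_t* rgba, int width, int height)
{
    for (int i = 0; i < width * height; ++i) {
        rgba[4*i + 0] = rgb[3*i + 0];  // R
        rgba[4*i + 1] = rgb[3*i + 1];  // G
        rgba[4*i + 2] = rgb[3*i + 2];  // B
        rgba[4*i + 3] = 255;           // opaque alpha
    }
}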

Now, with ‘nvvidconv’, I have the advantage of getting the RGBA frame from the pipeline itself, and may not need to pass it through the RGB-to-RGBA conversion function.

This is my initial intuition, and I hope it works.

If there are any good sources to read about the different pixel formats, please feel free to suggest them. As always, you have been great!

Thanks,
Bhargav

Hello Dane,

My initial intuition was correct. On further inspection of the inference code I am using, I found an implementation of NV12-to-RGBA conversion.

Thus, I changed my pipeline to the following (and it works as expected!):

ss << "filesrc location=/home/nvidia/video_data/P1220016.MOV"
<< " ! decodebin"
<< " ! nvvidconv ! video/x-raw, format=(string)NV12"
<< ", width=(int)" << mWidth << ", height=(int)" << mHeight
<< " ! appsink name=mysink";

My follow-up questions are:

  1. Is any further optimization possible? I tried using nvivafilter as you suggested earlier, but when I test the pipeline with gst-launch, I get corrupted output, as shown in the attachment (corrupted.png).

    The pipeline:

    gst-launch-1.0 filesrc location="video_data/P1220016.MOV" ! decodebin ! nvivafilter customer-lib-name=libnvsample_cudaprocess.so cuda-process=true post-process=true ! "video/x-raw(memory:NVMM), format=(string)NV12, width=(int)640, height=(int)360" ! nvegltransform ! nveglglessink
    
  2. In general, what is the difference between video/x-raw(memory:NVMM) and plain video/x-raw? Does the former store the buffer in device memory and the latter in host memory?

  3. Related to question 2: when I try to use video/x-raw(memory:NVMM) in the appsink pipeline above, I get a black image. What could the issue be, if you may comment? I am using the following pipeline as a reference:

    ss << "nvcamerasrc fpsRange=\"30.0 30.0\""
    << " ! video/x-raw(memory:NVMM), width=(int)" << mWidth << ", height=(int)" << mHeight 
    << ", format=(string)NV12 ! nvvidconv flip-method=" << flipMethod << " ! "; 
    << "video/x-raw ! appsink name=mysink";
    

    Can I use something similar to this? Maybe:

    ss << "filesrc location=/home/nvidia/video_data/P1220016.MOV"
    << " ! decodebin"
    << " ! nvivafilter customer-lib-name=libnvsample_cudaprocess.so cuda-process=true post-process=true"
    << " ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)NV12"
    << ", width=(int)" << mWidth << ", height=(int)" << mHeight
    << " ! nvvidconv ! video/x-raw, format=(string)NV12"
    << " ! appsink name=mysink";
    

corrupted.png

Hi bhargavK

That is expected because of ‘cuda-process=true post-process=true’.
You may check the source: https://devtalk.nvidia.com/default/topic/963123/jetson-tx1/video-mapping-on-jetson-tx1/post/4980565/#4980565

video/x-raw(memory:NVMM) is a DMA buffer and video/x-raw is a CPU buffer.

In appsink, you must get a CPU buffer, as #11 suggests.
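Once the caps feeding appsink are plain video/x-raw, you can map the pulled buffer in system memory. A minimal sketch (processFrame is a hypothetical consumer, not a real API):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

// Sketch: pull one frame from appsink and read it as a CPU buffer.
// Requires that the caps before appsink are plain video/x-raw (not NVMM).
void pullOneFrame(GstAppSink* appsink)
{
    GstSample* sample = gst_app_sink_pull_sample(appsink);
    if (sample == nullptr)
        return;  // EOS or pipeline flushing

    GstBuffer* buffer = gst_sample_get_buffer(sample);

    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        processFrame(map.data, map.size);  // hypothetical consumer
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
}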