Issues when testing a video file with detectnet-camera

I was using the following command on the console, and it works:

gst-launch-1.0 -v filesrc location=test2.mp4 ! qtdemux name=demux demux.video_0 ! h264parse ! omxh264dec ! video/x-raw, width=960, height=540 ! nveglglessink -e

However, after I modified gstCamera.cpp to

ss << "filesrc location=test2.mp4 ! qtdemux name=demux demux.video_0 ! h264parse ! omxh264dec ! ";
ss << "video/x-raw, width=960, height=540, format=NV12 ! appsink name=mysink";

It gives me a segmentation fault:

nvidia@tegra-ubuntu:~/jetson-inference/build/aarch64/bin$ ./detectnet-camera 
detectnet-camera
  args (1):  0 [./detectnet-camera]  

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
filesrc location=test2.mp4 ! qtdemux name=demux demux.video_0 ! h264parse ! omxh264dec ! video/x-raw, width=960, height=540, format=NV12 ! appsink name=mysink
buffer done!!!init done!!!!!
detectnet-camera:  successfully initialized video device
    width:  960
   height:  540
    depth:  12 (bpp)


detectNet -- loading detection network model from:
          -- prototxt    networks/ped-100/deploy.prototxt
          -- model       networks/ped-100/snapshot_iter_70800.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- mean_pixel  0.000000
          -- threshold   0.500000
          -- batch_size  2

[GIE]  TensorRT version 2.1, build 2102
[GIE]  attempting to open cache file networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE]  loading network profile from cache... networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE]  platform has FP16 support.
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel loaded
[GIE]  CUDA engine context initialized with 3 bindings
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel input  binding index:  0
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel input  dims (b=2 c=3 h=512 w=1024) size=12582912
[cuda]  cudaAllocMapped 12582912 bytes, CPU 0x102a00000 GPU 0x102a00000
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage  binding index:  1
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage  dims (b=2 c=1 h=32 w=64) size=16384
[cuda]  cudaAllocMapped 16384 bytes, CPU 0x103600000 GPU 0x103600000
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes  binding index:  2
[GIE]  networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes  dims (b=2 c=4 h=32 w=64) size=65536
[cuda]  cudaAllocMapped 65536 bytes, CPU 0x103800000 GPU 0x103800000
networks/ped-100/snapshot_iter_70800.caffemodel initialized.
[cuda]  cudaAllocMapped 16 bytes, CPU 0x103a00000 GPU 0x103a00000
maximum bounding boxes:  8192
[cuda]  cudaAllocMapped 131072 bytes, CPU 0x103c00000 GPU 0x103c00000
[cuda]  cudaAllocMapped 32768 bytes, CPU 0x103810000 GPU 0x103810000
default X screen 0:   1360 x 768
[OpenGL]  glDisplay display window initialized
[OpenGL]   creating 960x540 texture
loaded image  fontmapA.png  (256 x 512)  2097152 bytes
[cuda]  cudaAllocMapped 2097152 bytes, CPU 0x103e00000 GPU 0x103e00000
[cuda]  cudaAllocMapped 8192 bytes, CPU 0x103604000 GPU 0x103604000
ready to camera streaming!!!![gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse0
[gstreamer] gstreamer changed state from NULL to READY ==> demux
[gstreamer] gstreamer changed state from NULL to READY ==> filesrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> omxh264dec-omxh264dec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse0
[gstreamer] gstreamer stream status CREATE ==> sink
[gstreamer] gstreamer changed state from READY to PAUSED ==> demux
[gstreamer] gstreamer changed state from READY to PAUSED ==> filesrc0
[gstreamer] gstreamer stream status ENTER ==> sink
NvMMLiteOpen : Block : BlockType = 261 
TVMR: NvMMLiteTVMRDecBlockOpen: 7907: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 261 
TVMR: cbBeginSequence: 1223: BeginSequence  960x544, bVPR = 0
TVMR: LowCorner Frequency = 100000 
TVMR: cbBeginSequence: 1622: DecodeBuffers = 5, pnvsi->eCodec = 4, codec = 0 
TVMR: cbBeginSequence: 1693: Display Resolution : (960x540) 
TVMR: cbBeginSequence: 1694: Display Aspect Ratio : (960x540) 
TVMR: cbBeginSequence: 1762: ColorFormat : 5 
TVMR: cbBeginSequence:1776 ColorSpace = NvColorSpace_YCbCr601
TVMR: cbBeginSequence: 1904: SurfaceLayout = 3
TVMR: cbBeginSequence: 2005: NumOfSurfaces = 12, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 2007: BeginSequence  ColorPrimaries = 2, TransferCharacteristics = 2, MatrixCoefficients = 2
Allocating new output: 960x544 (x 12), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3464: Send OMX_EventPortSettingsChanged : nFrameWidth = 960, nFrameHeight = 544 
[gstreamer] gstreamer decoder onPreroll
[gstreamer] gstreamer decoder onBuffer
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104000000 GPU 0x104000000
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x1040bde00 GPU 0x1040bde00
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104200000 GPU 0x104200000
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x1042bde00 GPU 0x1042bde00
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104400000 GPU 0x104400000
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x1044bde00 GPU 0x1044bde00
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104600000 GPU 0x104600000
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x1046bde00 GPU 0x1046bde00
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104800000 GPU 0x104800000
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x1048bde00 GPU 0x1048bde00
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104a00000 GPU 0x104a00000
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104abde00 GPU 0x104abde00
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104c00000 GPU 0x104c00000
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104cbde00 GPU 0x104cbde00
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104e00000 GPU 0x104e00000
[cuda]  cudaAllocMapped 777600 bytes, CPU 0x104ebde00 GPU 0x104ebde00
[cuda]   gstreamer camera -- allocated 16 ringbuffers, 777600 bytes each
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg duration-changed ==> h264parse0
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
Segmentation fault (core dumped)

Debug information (gdb backtrace) below:

Thread 9 "omxh264dec-omxh" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f8b7ff150 (LWP 9505)]
malloc_printerr (action=3, str=0x7fb7cb9770 "free(): invalid pointer", 
    ptr=0x7fb7f1cdb8, ar_ptr=0x43032013d9bff) at malloc.c:4993
4993	malloc.c: No such file or directory.
(gdb) bt
#0  malloc_printerr (action=3, str=0x7fb7cb9770 "free(): invalid pointer", 
    ptr=0x7fb7f1cdb8, ar_ptr=0x43032013d9bff) at malloc.c:4993
#1  0x0000007fb7c0f088 in _int_free (av=0x43032013d9bff, p=<optimized out>, 
    have_lock=0) at malloc.c:3867
#2  0x0000007fb7ec209c in gst_message_print(_GstBus*, _GstMessage*, void*) ()
   from /home/nvidia/jetson-inference/build/aarch64/lib/libjetson-inference.so
#3  0x0000007fb7ec188c in gstCamera::checkMsgBus() ()
   from /home/nvidia/jetson-inference/build/aarch64/lib/libjetson-inference.so
#4  0x0000007fb7ec0d04 in gstCamera::onBuffer(_GstAppSink*, void*) ()
   from /home/nvidia/jetson-inference/build/aarch64/lib/libjetson-inference.so
#5  0x0000007fb32e246c in ?? ()
   from /usr/lib/aarch64-linux-gnu/libgstapp-1.0.so.0
#6  0x000000000086be20 in ?? ()
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb)

Any help would be greatly appreciated.

It needs ‘nvvidconv’ to feed video/x-raw into the appsink. Please try:

ss << "filesrc location=test2.mp4 ! qtdemux name=demux demux.video_0 ! h264parse ! omxh264dec ! ";
ss << "nvvidconv ! video/x-raw,format=NV12 ! appsink name=mysink";

I tried adding “nvvidconv” to the launch string, but had no luck; it gives the same error. Below are the properties of the test file:

Dimensions: 960x540
Codec: H.264 (High Profile)
Framerate: 24 fps
Bitrate: 1050 kbps
Pixel format: NV12

Is the launch string wrong?

Hi,
Does onboardCamera() return true or false? It needs to return true (onboardCamera()=true).
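
(Some background on why this flag matters, with a rough sketch of the conversion logic in gstCamera::ConvertRGBA(); the helper names cudaNV12ToRGBAf()/cudaRGBToRGBAf() are taken from the jetson-inference source of that era and may differ in other versions.)

// sketch of the branch in gstCamera::ConvertRGBA() that depends on onboardCamera()
// (helper names are assumptions based on the jetson-inference source; verify locally)
if( onboardCamera() )
{
	// onboard/CSI path assumes NV12 buffers -- which is also what omxh264dec delivers here
	if( CUDA_FAILED(cudaNV12ToRGBAf((uint8_t*)input, (float4*)mRGBA, mWidth, mHeight)) )
		return false;
}
else
{
	// V4L2 webcam path assumes packed RGB buffers instead
	if( CUDA_FAILED(cudaRGBToRGBAf((uchar3*)input, (float4*)mRGBA, mWidth, mHeight)) )
		return false;
}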

Thanks for your reply. onboardCamera() still returns true, since I only changed the launch string and set DefaultWidth and DefaultHeight to 960 and 540.

I also tried taking “video/x-raw, format=NV12” out. On the console it works fine, but it doesn’t work in gstCamera.cpp.

@DaneLLL
I tried the method from this thread [url]https://github.com/dusty-nv/jetson-inference/issues/104[/url] and it works. I’m just curious why that one works while the previous one caused a segmentation fault.

The new one requires raw data, which consumes too much space, so I still prefer the method that can process the H.264 file.

Any suggestion?

Thanks

From the call stack, it looks like the segfault is occurring inside the gst_message_print() function. Can you try commenting out this line, rebuilding, and letting me know if the error changes?

[url]https://github.com/dusty-nv/jetson-inference/blob/e12e6e64365fed83e255800382e593bf7e1b1b1a/util/camera/gstUtility.cpp#L226[/url]

After I commented out this line, the previous launch string started working. Thanks so much!

Thanks for confirming. I’ve committed a patch to master that should fix this upstream.
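
For reference, the crash pattern above (“free(): invalid pointer” inside gst_message_print(), right after the “missing gst_tag_list_to_string()” messages) points at g_free() being called on a string that was never heap-allocated. A sketch of the kind of guard that avoids it; the actual code in gstUtility.cpp and the committed patch may differ:

// sketch of the GST_MESSAGE_TAG branch in gst_message_print() (gstUtility.cpp);
// the exact upstream code and the committed patch may differ
case GST_MESSAGE_TAG:
{
	GstTagList* tags = NULL;
	gst_message_parse_tag(message, &tags);

	gchar* txt = gst_tag_list_to_string(tags);	// heap-allocated string (may be NULL)

	printf("[gstreamer] gstreamer %s %s\n", GST_OBJECT_NAME(message->src),
	       txt != NULL ? txt : "missing gst_tag_list_to_string()");

	// only free what was actually allocated -- calling g_free() on a fallback
	// string literal is the sort of thing that produces the
	// "free(): invalid pointer" seen in the backtrace above
	if( txt != NULL )
		g_free(txt);

	if( tags != NULL )
		gst_tag_list_unref(tags);

	break;
}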