Jetson Inference on Xavier Core Dumped (Solved)

Hi there. When testing the jetson-inference package downloaded from GitHub, I get core dumps from all of the compiled binaries. For example, when testing imagenet-camera, the output looks like this:

imagenet-camera
args (1): 0 [./imagenet-camera]

[gstreamer] initialized gstreamer, version 1.14.1.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVCAMERA
[gstreamer] gstCamera pipeline string:
nvcamerasrc fpsRange="30.0 30.0" ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera failed to create pipeline
[gstreamer] (no element “nvcamerasrc”)
[gstreamer] failed to init gstCamera (GST_SOURCE_NVCAMERA)
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS
[gstreamer] gstCamera pipeline string:
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_NVARGUS

imagenet-camera: successfully initialized video device
width: 1280
height: 720
depth: 12 (bpp)

imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2

[TRT] TensorRT version 5.0.0
[TRT] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[TRT] loading network profile from cache… networks/bvlc_googlenet.caffemodel.2.tensorcache
[TRT] platform has FP16 support.
[TRT] networks/bvlc_googlenet.caffemodel loaded
[TRT] CUDA engine context initialized with 2 bindings
[TRT] networks/bvlc_googlenet.caffemodel input binding index: 0
[TRT] networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda] cudaAllocMapped 1204224 bytes, CPU 0x21bdcb000 GPU 0x21bdcb000
[TRT] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1
[TRT] networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda] cudaAllocMapped 8000 bytes, CPU 0x21bfcb000 GPU 0x21bfcb000
networks/bvlc_googlenet.caffemodel initialized.
[TRT] networks/bvlc_googlenet.caffemodel loaded
imageNet – loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
default X screen 0: 1920 x 1080
[OpenGL] glDisplay display window initialized
[OpenGL] creating 1280x720 texture
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x21c1cb000 GPU 0x21c1cb000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x21bfcd000 GPU 0x21bfcd000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter2
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter2
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline1
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg new-clock ==> pipeline1
[gstreamer] gstreamer msg stream-start ==> pipeline1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0

imagenet-camera: camera open for streaming
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :

Segmentation fault (core dumped)

It looks like something is wrong with the GStreamer pipeline?

Thanks.

Best

Hi senosy, do you have the camera module from the TX1/TX2 devkit plugged in? Do you get video from the nvgstcapture application? Is there any relevant output in the dmesg kernel log?

From this empty line just before the segfault, it looks like it wasn't able to enumerate the sensor modes, so the camera may not be detected properly:

GST_ARGUS: Available Sensor modes :

Normally it would list the available camera resolutions and framerates there.
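The checks suggested above can be run directly on the Jetson. These are generic diagnostic commands (the grep pattern and device paths are illustrative; adjust for your setup):

```shell
# Check whether the kernel detected the camera (look for probe/error messages)
dmesg | grep -iE "camera|video"

# List the V4L2 video devices that were enumerated
ls /dev/video*

# Try capturing directly with NVIDIA's capture tool (CSI camera)
nvgstcapture-1.0
```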

It still seems to be a gst-camera issue. I can't capture the camera video stream using GStreamer directly either. I'll check the code.

Which camera are you using?

Logitech C920 HD.
It looks like GStreamer doesn't like it.

To enable V4L2 USB webcam, see the note about the DEFAULT_CAMERA define in imagenet-camera.cpp from this section of the tutorial: [url]https://github.com/dusty-nv/jetson-inference#running-the-live-camera-recognition-demo[/url]

You will also need to do this patch for C920 YUY2 format on Xavier, and recompile.

That works. Thanks!

Hi GeForceX, can you install “v4l-utils” package and post the results of running this command?

$ v4l2-ctl -d /dev/video0 --list-formats-ext

Hmm, the YUYV formats appear the same as with the C920 here, so I think that YUY2 patch should work…

Can you try running a GStreamer pipeline directly similar to this one? [url]https://github.com/dusty-nv/jetson-inference/issues/267#issuecomment-422183610[/url]
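A test pipeline for the C920 might look something like the following. The device index, caps, and sink here are assumptions for illustration; the exact pipeline is in the linked issue:

```shell
# Preview YUY2 frames from the webcam directly
# (adjust /dev/video0, resolution, and framerate to match v4l2-ctl output)
gst-launch-1.0 -v v4l2src device=/dev/video0 ! \
    "video/x-raw, format=YUY2, width=1280, height=720, framerate=30/1" ! \
    xvimagesink
```

If this standalone pipeline also fails, the problem is in the camera/GStreamer setup rather than in jetson-inference itself.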

OK good to hear, thanks for letting us know!