Run the "jetson-inference" ERROR "Segmentation fault (core dumped)"

I followed the example “Classifying Images with ImageNet” (imagenet-console-2.md in the dusty-nv/jetson-inference GitHub repository)
and I get this error:
$ ./imagenet-console orange_0.jpg output_0.JPG
imagenet-console
args (3): 0 [./imagenet-console] 1 [orange_0.jpg] 2 [output_0.JPG]

imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2

[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[TRT] failed to retrieve tensor for Output "prob"
[TRT] device GPU, configuring CUDA engine
[TRT] device GPU, building FP16: ON
[TRT] device GPU, building INT8: OFF
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
Segmentation fault (core dumped)

Where did I go wrong?
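
If it helps, I can also run it under gdb and post a backtrace of the crash (rough sketch, assuming gdb is installed from apt; run the program, wait for the segfault, then bt prints the backtrace):

$ sudo apt-get install gdb
$ gdb --args ./imagenet-console orange_0.jpg output_0.JPG
(gdb) run
(gdb) bt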

Hi,

One possible cause of a segmentation fault is running out of memory.
Could you monitor the system with tegrastats at the same time and share the log with us?

sudo tegrastats
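
If it is easier, you can also save the output to a file while the test runs in another terminal (tegrastats just writes to stdout, so piping it through tee works):

$ sudo tegrastats | tee tegrastats.log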

Thanks.

Hi AastaLLL, here is the log:

RAM 1081/3957MB (lfb 517x4MB) SWAP 0/8192MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [57%@518,20%@518,32%@403,20%@614] EMC_FREQ 1%@1600 GR3D_FREQ 40%@76 APE 25 PLL@33.5C CPU@36C PMIC@100C GPU@35C AO@43C thermal@35.75C POM_5V_IN 2475/2035 POM_5V_GPU 121/90 POM_5V_CPU 486/334
RAM 1081/3957MB (lfb 517x4MB) SWAP 0/8192MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [31%@825,19%@825,14%@825,17%@825] EMC_FREQ 1%@1600 GR3D_FREQ 93%@76 APE 25 PLL@33.5C CPU@36C PMIC@100C GPU@35C AO@43C thermal@35.25C POM_5V_IN 2556/2139 POM_5V_GPU 162/105 POM_5V_CPU 486/365
RAM 1082/3957MB (lfb 517x4MB) SWAP 0/8192MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [70%@518,11%@518,16%@518,11%@518] EMC_FREQ 3%@1600 GR3D_FREQ 9%@307 APE 25 PLL@34C CPU@36C PMIC@100C GPU@35C AO@43.5C thermal@35.75C POM_5V_IN 2678/2229 POM_5V_GPU 162/114 POM_5V_CPU 526/391
RAM 1082/3957MB (lfb 517x4MB) SWAP 0/8192MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [44%@403,17%@403,8%@403,15%@403] EMC_FREQ 25%@204 GR3D_FREQ 10%@76 APE 25 PLL@33.5C CPU@36C PMIC@100C GPU@35C AO@43.5C thermal@35.5C POM_5V_IN 2079/2207 POM_5V_GPU 81/109 POM_5V_CPU 285/376
RAM 1108/3957MB (lfb 517x4MB) SWAP 0/8192MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [24%@1428,9%@1428,9%@1428,10%@1428] EMC_FREQ 3%@1600 GR3D_FREQ 31%@76 APE 25 PLL@34C CPU@36.5C PMIC@100C GPU@35C AO@43.5C thermal@35.25C POM_5V_IN 3024/2309 POM_5V_GPU 80/106 POM_5V_CPU 1008/455
RAM 1092/3957MB (lfb 515x4MB) SWAP 0/8192MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [72%@1428,6%@1428,28%@1428,13%@1428] EMC_FREQ 3%@1600 GR3D_FREQ 27%@76 APE 25 PLL@34.5C CPU@37.5C PMIC@100C GPU@35C AO@43.5C thermal@36.25C POM_5V_IN 3220/2411 POM_5V_GPU 80/103 POM_5V_CPU 1127/530
RAM 1087/3957MB (lfb 515x4MB) SWAP 0/8192MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [5%@102,2%@102,43%@102,32%@102] EMC_FREQ 25%@204 GR3D_FREQ 0%@76 APE 25 PLL@33.5C CPU@36C PMIC@100C GPU@34.5C AO@43C thermal@36.25C POM_5V_IN 1431/2313 POM_5V_GPU 40/96 POM_5V_CPU 122/489
RAM 1087/3957MB (lfb 515x4MB) SWAP 0/8192MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [14%@204,12%@204,8%@204,11%@204] EMC_FREQ 20%@204 GR3D_FREQ 0%@76 APE 25 PLL@33.5C CPU@36C PMIC@100C GPU@35C AO@43C thermal@35.5C POM_5V_IN 1472/2236 POM_5V_GPU 40/91 POM_5V_CPU 163/459
RAM 1087/3957MB (lfb 515x4MB) SWAP 0/8192MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [19%@307,18%@307,13%@307,16%@307] EMC_FREQ 20%@204 GR3D_FREQ 5%@76 APE 25 PLL@33C CPU@36C PMIC@100C GPU@34.5C AO@43C thermal@35.5C POM_5V_IN 1551/2179 POM_5V_GPU 40/87 POM_5V_CPU 244/441
RAM 1087/3957MB (lfb 515x4MB) SWAP 0/8192MB (cached 0MB) IRAM 0/252kB(lfb 252kB) CPU [15%@102,6%@102,5%@102,3%@102] EMC_FREQ 15%@204 GR3D_FREQ 0%@76 APE 25 PLL@33C CPU@36C PMIC@100C GPU@35C AO@43C thermal@35.5C POM_5V_IN 1390/2118 POM_5V_GPU 40/83 POM_5V_CPU 122/417

Thanks.

It doesn't seem to be out of memory.
Can anyone help me?

Hi,

Checking the error log again, it looks like your model is somehow corrupted.

[TRT] failed to retrieve tensor for Output "prob"

Could you re-download the model and try it again?

$ cd jetson-inference/data/networks
$ wget http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel
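
After the download finishes, it is also worth a quick sanity check that the file on disk is a real binary model rather than a truncated download or an HTML error page (just a sketch; the exact size is not important, but it should be tens of MB and not be reported as HTML or ASCII text):

$ ls -lh bvlc_googlenet.caffemodel
$ file bvlc_googlenet.caffemodel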

Here is a similar issue for your reference:
https://devtalk.nvidia.com/default/topic/1023340/jetson-tx2/segmentation-fault-when-imagenet-console-orange_0-jpg-output_0-jpg-on-tx2/

Thanks.

Thanks for your reply.

I'm having the same issue. Re-downloading the networks and rebuilding the project does not help. Did anybody find a solution? Thanks!

Found a solution to my issue: our company network blocked some of the pages that the build needs to download from (unable-to-establish-SSL-connection errors, etc.). The errors show up in the terminal during the cmake or make step. If you are facing similar issues, take the NVIDIA device somewhere without such network filtering (e.g. home), remove the whole GitHub project, and run the download, make, and install procedure again. The demo program should then work :)
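
For reference, the clean rebuild I mean is roughly the procedure from the project README (this is only a sketch; check the README of your jetson-inference version for the exact steps):

$ cd ~
$ rm -rf jetson-inference
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build && cd build
$ cmake ../
$ make
$ sudo make install

While cmake runs, watch the terminal for failed downloads or SSL errors; on an unfiltered connection they should be gone and the model files under data/networks should come down intact.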