imagenet-console application not working properly

Hi all

I am using the Jetson Nano module. I flashed the Jetson Nano with JetPack 4.2 onto a 16GB SD card, downloaded the jetson-inference source code from git, and built it using the steps provided in the link itself; the build was successful.

The problem is that I wanted to try the Hello AI World application. When I try to run the
imagenet-console app, it gives a segmentation fault. What could be the possible mistakes I have made?

Adding the logs here:

bonthu$:~/jetson-inference/build/aarch64/bin$ ./imagenet-console orange_0.jpg ouput_0.jpg
imagenet-console
args (3): 0 [./imagenet-console] 1 [orange_0.jpg] 2 [ouput_0.jpg]

imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2

[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[TRT] failed to retrieve tensor for Output "prob"
[TRT] device GPU, configuring CUDA engine
[TRT] device GPU, building FP16: ON
[TRT] device GPU, building INT8: OFF
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
Segmentation fault (core dumped)
bonthu$:

Same problem with the TX2 and the 4.2 SDK Manager... please help.

Hi guys, this line from the output probably means that either the googlenet.prototxt or the bvlc_googlenet.caffemodel file is somehow corrupted:

[TRT] failed to retrieve tensor for Output "prob"

For reference, this is what these lines should look like:

[TRT]  retrieved Output tensor "prob":  1000x1x1
[TRT]  retrieved Input tensor "data":  3x224x224

Here is what you can do to download fresh copies of these files:

$ cd jetson-inference/data/networks/
$ wget https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt -O googlenet.prototxt
$ wget http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel -O bvlc_googlenet.caffemodel
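
After downloading, it's worth sanity-checking that you actually got the model files and not an HTML error page from a proxy. A quick check (the expected header and rough size are from my memory of the upstream files, so treat them as approximations):

$ head -n 3 googlenet.prototxt       # should start with something like: name: "GoogleNet"
$ ls -lh bvlc_googlenet.caffemodel   # should be on the order of 50MB, not a few KB
$ file bvlc_googlenet.caffemodel     # should report binary data, not ASCII text/HTML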

Then try running the program again. Alternatively, you can clear out your build tree and re-build; it will download all the networks again for you:

$ cd jetson-inference
$ rm -r -f build
$ mkdir build
$ cd build
$ cmake ../
$ make
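
(For reference, the cmake step is what runs the CMakePreBuild.sh download script, so watch its output for any failed model downloads. Re-running cmake from the build folder should re-attempt the downloads; this is a sketch based on how the script is hooked into the configure step:)

$ cd jetson-inference/build
$ cmake ../
# watch the CMakePreBuild.sh output for wget errors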

I followed the above method; there's no difference in the result.
I removed the build folder and re-built after downloading googlenet.prototxt and bvlc_googlenet.caffemodel into jetson-inference/data/networks.

Here's the result:

imagenet-console
args (3): 0 [./imagenet-console] 1 [orange_0.jpg] 2 [output_0.jpg]

imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2

[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[TRT] failed to retrieve tensor for Output "prob"
[TRT] device GPU, configuring CUDA engine
[TRT] device GPU, building FP16: ON
[TRT] device GPU, building INT8: OFF
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
Segmentation fault (core dumped)

I found a difference in the networks folder.
Here's a listing of it. I think I need to use googlenet_noprob.prototxt... but I don't know how to use that instead of googlenet.prototxt.

alexnet.prototxt bvlc_googlenet.caffemodel googlenet_noprob.prototxt
alexnet_noprob.prototxt detectnet.prototxt ilsvrc12_synset_words.txt
bvlc_alexnet.caffemodel googlenet.prototxt

I tried both googlenet.prototxt and googlenet_noprob.prototxt, but they give the same result.

Are you able to try it using the default SD card image?

This could help narrow down if it is something related to the SDK Manager or not.

You should use googlenet.prototxt, not googlenet_noprob.prototxt. I noticed both of your IPs are from Asia; maybe it is some download issue. Now that you have re-built, can you try these steps once more? I just re-cloned and re-built from scratch here, and it works as expected.

$ cd jetson-inference/data/networks/
$ wget https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt -O googlenet.prototxt
$ wget http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel -O bvlc_googlenet.caffemodel
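
And in case it helps later: if you ever do need to point imagenet-console at specific model files, the program accepts explicit paths on the command line. I'm going from the current code here, so double-check the flag names against your build; it would look something like:

# NOTE: the flag names below are an assumption based on the current source; verify on your build
$ ./imagenet-console orange_0.jpg output_0.jpg \
    --prototxt=networks/googlenet.prototxt \
    --model=networks/bvlc_googlenet.caffemodel \
    --labels=networks/ilsvrc12_synset_words.txt \
    --input_blob=data --output_blob=prob

Note that the noprob variant lacks the final softmax layer, so the default 'prob' output blob wouldn't exist in it anyway.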

Thank you for your help. I tried re-building 3 times, but it gives the same result.
I deleted the build folder and downloaded new files into the networks folder using the commands below:

$ wget https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt -O googlenet.prototxt
$ wget http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel -O bvlc_googlenet.caffemodel

Here's the code I used to re-build:

$ cd jetson-inference/build
$ cmake ../
$ make
$ sudo make install

Here's the command to run the example:
$ cd jetson-inference/build/aarch64/bin
$ ./imagenet-console orange_0.jpg output_0.jpg

Can you try using a fresh SD card image? Maybe there is something off about the setup of the driver packages or protobuf package. Somehow it is not able to find the layer blobs from the network model.
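
It would also help to see exactly where it is crashing. If you have gdb installed, you can capture a backtrace (plain gdb usage, nothing Jetson-specific):

$ gdb --args ./imagenet-console orange_0.jpg output_0.jpg
(gdb) run
# wait for the SIGSEGV, then:
(gdb) bt

If you can post the backtrace, that would narrow down whether the crash happens in the caffe parser or inside TensorRT.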

Newbie to this; I just received my Jetson Nano on Thursday. I was having the same error, followed the instructions above in post #3, and it's working now. Thanks!

Well, yes, box.com is blocked here in China. I am guessing quite a lot of people are circumventing the issue by manually downloading those files and putting them where they need to be. But should that be a problem?

The CMakePreBuild.sh script downloads from Box.com (Google Drive doesn't allow automatic wget downloads of large files), but the URLs you quoted in your post are the original sources from GitHub and the Berkeley website. So try running those commands first and see if you can get GoogleNet working.
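
If you want to see exactly which files CMakePreBuild.sh tries to fetch (so you can download them manually from a machine where they are reachable), you can grep the script for its URLs; this assumes it sits in the repo root, as it does in the current tree:

$ cd jetson-inference
$ grep -n 'http' CMakePreBuild.sh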

I had the same problem. Re-downloading the files using the URLs in #3 worked for me.

I also put up a mirror of all the models on GitHub now, available here:

https://github.com/dusty-nv/jetson-inference/releases
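
To use the mirror, download the model archive you need from that releases page and extract it into the data/networks folder. A rough sketch for GoogleNet; the release tag and asset name here are from memory, so take the real link from the releases page:

$ cd jetson-inference/data/networks
# URL below is an assumption -- copy the actual link from the releases page
$ wget https://github.com/dusty-nv/jetson-inference/releases/download/model-mirror-190618/GoogleNet.tar.gz
$ tar -xzvf GoogleNet.tar.gz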

That is helpful, thanks!

Please, how do I install the mirror?

I can't really run "wget https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt -O googlenet.prototxt" properly, as I got this:

$ wget https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt -O googlenet.prototxt
--2019-12-24 18:39:43-- https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.228.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.228.133|:443... failed: Connection refused.

FYI, I'm from China as well; could it be a problem with our internet connection?

I'm also not sure how to install from the mirror you shared; could you show us the steps for doing so?

Thanks!