Hi, I am following the steps posted here:
The build was smooth and error-free, but here are the error messages I get when I try the first example. The other examples generate similar error messages:
root@blp-desktop:~/Desktop/jetson-inference/build/aarch64/bin# ./imagenet-console orange_0.jpg output_0.jpg
imagenet-console
args (3): 0 [./imagenet-console] 1 [orange_0.jpg] 2 [output_0.jpg]
imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2
[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
Weights for layer conv1/7x7_s2 doesn't exist
[TRT] CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer conv1/7x7_s2 doesn't exist
[TRT] CaffeParser: ERROR: Attempting to access NULL weights
[TRT] Parameter check failed at: ../builder/Network.cpp::addConvolution::66, condition: kernelWeights.values != nullptr
error parsing layer type Convolution index 1
[TRT] device GPU, failed to parse caffe network
device GPU, failed to load networks/bvlc_googlenet.caffemodel
failed to load networks/bvlc_googlenet.caffemodel
imageNet -- failed to initialize.
imagenet-console: failed to initialize imageNet
Here is what is inside data/networks:
root@blp-desktop:~/Desktop/jetson-inference/data/networks# ls
alexnet_noprob.prototxt DetectNet-COCO-Bottle FCN-Alexnet-Cityscapes-HD multiped-500
alexnet.prototxt DetectNet-COCO-Chair FCN-Alexnet-Pascal-VOC ped-100
bvlc_alexnet.caffemodel DetectNet-COCO-Dog GoogleNet-ILSVRC12-subset Super-Resolution-BSD500
bvlc_googlenet.caffemodel detectnet.prototxt googlenet_noprob.prototxt
Deep-Homography-COCO facenet-120 googlenet.prototxt
DetectNet-COCO-Airplane FCN-Alexnet-Aerial-FPV-720p ilsvrc12_synset_words.txt
I have the same problem. I tried everything from the link and it still doesn’t work. Please let me know if there are any alternatives.
Are you able to run any other networks, either image recognition or object detection?
$ ./imagenet-console orange_0.jpg test.jpg alexnet
$ ./imagenet-console orange_0.jpg test.jpg googlenet_12
Let us know if the output is any different when trying these. You can also try object detection:
$ ./detectnet-console peds-004.jpg test.jpg
$ ./detectnet-console dog_1.jpg test.jpg coco-dog
Also, can you run the “ls -ll” command from the data/networks folder and check the sizes of the models to confirm they match the listing below?
~/workspace/jetson-inference/data/networks$ ls -ll
total 436832
-rw-r--r-- 1 nvidia nvidia 3557 May 30 14:00 alexnet_noprob.prototxt
-rw-r--r-- 1 nvidia nvidia 3629 May 30 14:00 alexnet.prototxt
-rw-r--r-- 1 nvidia nvidia 243862414 May 30 14:00 bvlc_alexnet.caffemodel
-rw-r--r-- 1 nvidia nvidia 134164912 May 20 17:04 bvlc_alexnet.caffemodel.2.1.GPU.FP16.engine
-rw-r--r-- 1 nvidia nvidia 53533754 May 30 14:00 bvlc_googlenet.caffemodel
-rw-rw-r-- 1 nvidia nvidia 15534224 May 8 14:49 bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
drwxr-xr-x 2 nvidia nvidia 4096 Jan 18 14:26 Deep-Homography-COCO
drwxr-xr-x 2 nvidia nvidia 4096 Jan 2 13:32 DetectNet-COCO-Airplane
drwxr-xr-x 2 nvidia nvidia 4096 Jan 2 13:32 DetectNet-COCO-Bottle
drwxr-xr-x 2 nvidia nvidia 4096 Jan 2 13:32 DetectNet-COCO-Chair
drwxr-xr-x 2 nvidia nvidia 4096 Jan 2 13:32 DetectNet-COCO-Dog
-rw-r--r-- 1 nvidia nvidia 42924 May 8 12:32 detectnet.prototxt
drwxr-xr-x 2 nvidia nvidia 4096 Jan 2 13:18 facenet-120
drwxr-xr-x 2 nvidia nvidia 4096 Apr 11 2017 FCN-Alexnet-Aerial-FPV-720p
drwxr-xr-x 2 nvidia nvidia 4096 Nov 29 2016 FCN-Alexnet-Cityscapes-HD
drwxr-xr-x 2 nvidia nvidia 4096 Apr 11 2017 FCN-Alexnet-Pascal-VOC
drwxr-xr-x 2 nvidia nvidia 4096 May 31 14:31 GoogleNet-ILSVRC12-subset
-rw-r--r-- 1 nvidia nvidia 35776 May 30 14:00 googlenet_noprob.prototxt
-rw-r--r-- 1 nvidia nvidia 35861 May 30 14:00 googlenet.prototxt
-rw-r--r-- 1 nvidia nvidia 31675 May 8 12:32 ilsvrc12_synset_words.txt
drwxr-xr-x 2 nvidia nvidia 4096 Jan 2 13:18 multiped-500
drwxr-xr-x 2 nvidia nvidia 4096 Jan 2 13:18 ped-100
drwxr-xr-x 2 nvidia nvidia 4096 Feb 19 20:33 Super-Resolution-BSD500
I couldn’t get detectnet to work either. I checked the networks folder, and it seems I am missing a lot of the files. Here is what I have:
total 124
-rw-r--r-- 1 jiahong jiahong 6392 Mar 1 15:21 alexnet_noprob.prototxt
-rw-r--r-- 1 jiahong jiahong 6392 Mar 1 15:21 alexnet.prototxt
-rw-r--r-- 1 jiahong jiahong 6392 Mar 1 15:21 bvlc_alexnet.caffemodel
-rw-r--r-- 1 jiahong jiahong 6392 Mar 1 15:21 bvlc_googlenet.caffemodel
-rw-r--r-- 1 jiahong jiahong 42924 May 31 13:25 detectnet.prototxt
-rw-r--r-- 1 jiahong jiahong 6392 Mar 1 15:21 googlenet_noprob.prototxt
-rw-r--r-- 1 jiahong jiahong 6392 Mar 1 15:21 googlenet.prototxt
-rw-r--r-- 1 jiahong jiahong 31675 May 31 13:25 ilsvrc12_synset_words.txt
Could you tell me how to get the other files? Thank you.
Ah yes, I see: you are missing files, and the models you do have aren’t the correct size, so they didn’t download properly. As a test, try running these commands to re-download them from the original source:
$ cd jetson-inference/data/networks/
$ wget https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt -O googlenet.prototxt
$ wget http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel -O bvlc_googlenet.caffemodel
Then check “ls -ll” again: your bvlc_googlenet.caffemodel should be 53533754 bytes and googlenet.prototxt should be 35861 bytes. If the sizes match, try running imagenet-console again.
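If the size still looks wrong, one quick way to see what actually got downloaded (just a sanity check; a blocked or failed download often leaves an HTML error page where the model should be) is to inspect the file:
$ file bvlc_googlenet.caffemodel
$ head -c 100 bvlc_googlenet.caffemodel
A real caffemodel will show up as binary data, while a failed download usually starts with HTML.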
To re-download all the models, clear your build directory and re-run cmake:
$ cd jetson-inference
$ rm -r -f build
$ mkdir build
$ cd build
$ cmake ../
$ make
$ sudo make install
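Once the build finishes, you can verify the downloads before running the examples again (a quick check against the sizes in the listing above):
$ cd ../data/networks
$ ls -ll bvlc_googlenet.caffemodel googlenet.prototxt
bvlc_googlenet.caffemodel should again be 53533754 bytes and googlenet.prototxt 35861 bytes.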
Thank you! I was able to get it to work with your help. I realized that re-running cmake and make corrupts the caffemodel, so you have to download the models manually after make. If you download first and then run cmake, it just corrupts the caffemodel again.
Much appreciated!
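In other words, the ordering that worked for me was roughly this (same commands as above, just with the manual download moved after the build):
$ cd jetson-inference
$ rm -r -f build
$ mkdir build
$ cd build
$ cmake ../
$ make
$ sudo make install
$ cd ../data/networks/
$ wget https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt -O googlenet.prototxt
$ wget http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel -O bvlc_googlenet.caffemodel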
For users in China, box.com is blocked, so your options are:
- get a router with a VPN.
- download the file on a computer with a VPN and copy it over to the Nano (see the scp example below).
- enable internet sharing on a computer with a VPN, and let the Nano access the internet through that machine.
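If you go the download-on-another-machine route, one way to get the file onto the Nano afterwards is scp (just a sketch; the username, IP address, and destination path below are placeholders for your own setup):
$ scp bvlc_googlenet.caffemodel <user>@<nano-ip>:~/jetson-inference/data/networks/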
Any other options?