I am using a Jetson Nano module. I flashed the Jetson Nano with JetPack 4.2 onto a 16 GB SD card, downloaded the jetson-inference source code from Git, and built it using the steps provided in the link itself; the build succeeded.
The problem is that when I try to run the Hello AI World imagenet-console app, it gives a segmentation fault. What mistakes could I have made?
[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[TRT] failed to retrieve tensor for Output "prob"
[TRT] device GPU, configuring CUDA engine
[TRT] device GPU, building FP16: ON
[TRT] device GPU, building INT8: OFF
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
Segmentation fault (core dumped)
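A backtrace would show exactly where the crash happens. Here is a minimal sketch, assuming gdb is installed and the apps were built into the default jetson-inference/build/aarch64/bin directory (adjust paths for your setup):

```shell
# Re-run imagenet-console under gdb in batch mode to capture a backtrace
# at the point of the segmentation fault.
cd jetson-inference/build/aarch64/bin
gdb -batch -ex run -ex bt --args ./imagenet-console orange_0.jpg output_0.jpg
```

The `bt` output should show whether the crash happens inside the TensorRT engine build or in jetson-inference's own code (for instance, while handling the "prob" output tensor that the log says it failed to retrieve).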
I followed the above method; there's no difference in the result.
I removed the build folder and rebuilt after downloading googlenet.prototxt and bvlc_googlenet.caffemodel into /jetson-inference/data/networks:
(identical log output to the first run, again ending in:)
Segmentation fault (core dumped)
I found a difference in the networks folder.
Here's a list of it. I think I need to use googlenet_noprob.prototxt, but I don't know how to use that instead of googlenet.prototxt.
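For reference, imagenet-console in jetson-inference of that era accepted explicit model arguments on the command line, so a specific prototxt/caffemodel pair can be loaded instead of the built-in "googlenet" alias. A sketch (flag names may differ between jetson-inference versions, so check your version's usage output):

```shell
# Load an explicit prototxt/caffemodel pair; the input/output blob names
# below are those of the standard GoogLeNet deploy network.
./imagenet-console orange_0.jpg output_0.jpg \
    --prototxt=networks/googlenet.prototxt \
    --model=networks/bvlc_googlenet.caffemodel \
    --labels=networks/ilsvrc12_synset_words.txt \
    --input_blob=data --output_blob=prob
```

Note that googlenet_noprob.prototxt omits the final softmax layer that produces the "prob" blob, so pointing the app at it with `--output_blob=prob` would reproduce the "failed to retrieve tensor" error.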
Are you able to try it using the default SD card image?
This could help narrow down if it is something related to the SDK Manager or not.
You should use googlenet.prototxt, not googlenet_noprob.prototxt. I noticed both of your IPs are from Asia; maybe it is some download issue. Now that you've rebuilt, can you try these steps once more? I just re-cloned and rebuilt from scratch here, and it works as expected.
Thank you for your help. I tried rebuilding 3 times, but it gives the same result.
I deleted the build folder and downloaded new files into the networks folder using the command below.
Can you try using a fresh SD card image? Maybe there is something off about the setup of the driver packages or protobuf package. Somehow it is not able to find the layer blobs from the network model.
Well, yes, box.com is blocked here in China. I am guessing quite a lot of people circumvent the issue by manually downloading those files and putting them where they need to be. But should that be a problem?
The CMakePreBuild.sh script downloads from Box.com (Google Drive doesn't allow automatic downloading via wget for large files), but the URLs you quoted in your post are the original sources from GitHub and the Berkeley website. So try running those commands first and see if you can get GoogLeNet working.
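For example, here is a sketch of fetching GoogLeNet from the standard upstream BVLC locations and renaming the files to what jetson-inference expects (these URLs are an assumption about which original sources were meant, and jetson-inference's copy of googlenet.prototxt may differ slightly from the upstream deploy.prototxt):

```shell
# Download the GoogLeNet deploy prototxt and weights from their original
# sources (GitHub / Berkeley) into the jetson-inference networks folder.
cd jetson-inference/data/networks
wget -O googlenet.prototxt \
    https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt
wget -O bvlc_googlenet.caffemodel \
    http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel
```

Afterwards it's worth sanity-checking the file sizes: the caffemodel should be around 50 MB, and a truncated download would explain the failure to find the layer blobs.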