imagenet-console segfault -- it looks like nvinfer1::builder::buildGraph encounters a NULL pointer

This is the basic test on the Jetson Nano and should work out of the box. Any ideas? Thanks in advance!

== Run log with the segmentation fault
~/Downloads/jetson-inference/build/aarch64/bin$ ./imagenet-console orange_0.jpg orange-out.jpg
imagenet-console
args (3): 0 [./imagenet-console] 1 [orange_0.jpg] 2 [orange-out.jpg]

imageNet -- loading classification network model from:
-- prototxt networks/googlenet.prototxt
-- model networks/bvlc_googlenet.caffemodel
-- class_labels networks/ilsvrc12_synset_words.txt
-- input_blob 'data'
-- output_blob 'prob'
-- batch_size 2

[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[TRT] failed to retrieve tensor for Output "prob"
[TRT] device GPU, configuring CUDA engine
[TRT] device GPU, building FP16: ON
[TRT] device GPU, building INT8: OFF
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
Segmentation fault (core dumped)

== Checking with valgrind: it looks like the buildGraph function hits a NULL pointer. Any ideas?
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
==10465== Invalid read of size 8
==10465== at 0x4FCFC20: nvinfer1::builder::buildGraph(nvinfer1::CudaEngineBuildConfig const&, nvinfer1::builder::Graph&, nvinfer1::Network const&) (in /usr/lib/aarch64-linux-gnu/libnvinfer.so.5.0.6)
==10465== by 0x4FD0183: nvinfer1::builder::buildEngine(nvinfer1::CudaEngineBuildConfig&, nvinfer1::rt::HardwareContext const&, nvinfer1::Network const&) (in /usr/lib/aarch64-linux-gnu/libnvinfer.so.5.0.6)
==10465== by 0x503C9F3: nvinfer1::builder::Builder::buildCudaEngine(nvinfer1::INetworkDefinition&) (in /usr/lib/aarch64-linux-gnu/libnvinfer.so.5.0.6)
==10465== by 0x491B1E7: tensorNet::ProfileModel(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, unsigned int, precisionType, deviceType, bool, nvinfer1::IInt8Calibrator*, std::ostream&) (in /home/bz/Downloads/jetson-inference/build/aarch64/lib/libjetson-inference.so)
==10465== by 0x491B973: tensorNet::LoadNetwork(char const*, char const*, char const*, char const*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, unsigned int, precisionType, deviceType, bool, nvinfer1::IInt8Calibrator*, CUstream_st*) (in /home/bz/Downloads/jetson-inference/build/aarch64/lib/libjetson-inference.so)
==10465== by 0x491B44B: tensorNet::LoadNetwork(char const*, char const*, char const*, char const*, char const*, unsigned int, precisionType, deviceType, bool, nvinfer1::IInt8Calibrator*, CUstream_st*) (in /home/bz/Downloads/jetson-inference/build/aarch64/lib/libjetson-inference.so)
==10465== by 0x491679B: imageNet::init(char const*, char const*, char const*, char const*, char const*, char const*, unsigned int, precisionType, deviceType, bool) (in /home/bz/Downloads/jetson-inference/build/aarch64/lib/libjetson-inference.so)
==10465== by 0x49165EB: imageNet::init(imageNet::NetworkType, unsigned int, precisionType, deviceType, bool) (in /home/bz/Downloads/jetson-inference/build/aarch64/lib/libjetson-inference.so)
==10465== by 0x49163A7: imageNet::Create(imageNet::NetworkType, unsigned int, precisionType, deviceType, bool) (in /home/bz/Downloads/jetson-inference/build/aarch64/lib/libjetson-inference.so)
==10465== by 0x4916BEF: imageNet::Create(int, char**) (in /home/bz/Downloads/jetson-inference/build/aarch64/lib/libjetson-inference.so)
==10465== by 0x11074B: main (in /home/bz/Downloads/jetson-inference/build/aarch64/bin/imagenet-console)
==10465== Address 0x0 is not stack'd, malloc'd or (recently) free'd
==10465==
==10465==
==10465== Process terminating with default action of signal 11 (SIGSEGV)
==10465== Access not within mapped region at address 0x0
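For reference, the trace above came from a plain valgrind run with default options, from the same bin directory:

$ cd ~/Downloads/jetson-inference/build/aarch64/bin
$ valgrind ./imagenet-console orange_0.jpg orange-out.jpg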

Hi BruceZhang, are you using the default SD card image, or did you set up your Nano with JetPack using NVIDIA SDK Manager?

Maybe your googlenet model got corrupted or wasn't downloaded properly (that would also explain the 'failed to retrieve tensor for Output "prob"' warning right before the crash). Can you try re-downloading it?

$ cd jetson-inference/data/networks/
$ wget https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt -O googlenet.prototxt
$ wget http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel -O bvlc_googlenet.caffemodel
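Afterwards you can sanity-check what actually landed on disk; a failed download often leaves a small HTML error page instead of the real file (the ~50 MB figure below is approximate):

$ ls -lh googlenet.prototxt bvlc_googlenet.caffemodel   # the caffemodel should be roughly 50 MB
$ head -c 200 googlenet.prototxt                        # should show plain-text layer definitions, not HTML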

If that doesn’t work, please try re-building the repo, which will download all the models again:

$ cd jetson-inference
$ rm -r -f build
$ mkdir build
$ cd build
$ cmake ../
$ make
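Also, if a TensorRT engine cache was ever generated from the bad model, it's worth deleting it so it gets re-built from the fresh files -- the .engine files are just a cache and are safe to remove:

$ rm -f jetson-inference/data/networks/*.engine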

Many thanks! Your wget links are good.

I used the SD card image. BUT I found that all the nvidia.box.com links in CMakePreBuild.sh failed, because the IP resolved for “nvidia.app.box.com” was unreachable from the board. Fortunately, my laptop resolved a different IP that was reachable, so I downloaded the files there and then scp'd them to the board.
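In case anyone else hits the same DNS/IP issue, the workaround was along these lines (the URL is a placeholder -- take the real ones from CMakePreBuild.sh -- and <nano-ip> is your board's address):

# on the laptop, which resolved a reachable IP for nvidia.box.com:
$ wget <url-from-CMakePreBuild.sh> -O bvlc_googlenet.caffemodel
# then copy the file over to the board:
$ scp bvlc_googlenet.caffemodel bz@<nano-ip>:~/Downloads/jetson-inference/data/networks/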