detectnet-console not working on Nano

Hi,

I just set up a new Jetson Nano and tried running detectnet-console by cloning the `jetson-inference` repo and building it from source as per the instructions.

After building, I moved to `./build/aarch64/bin`, where the built binaries should be, and ran:

$ ./detectnet-console dog.jpg dog2.jpg coco-dog

which should detect dogs in an image using the coco-dog model.

However, the command did not work and spat out the following messages:

detectnet-console
  args (4):  0 [./detectnet-console]  1 [dog.jpg]  2 [dog2.jpg]  3 [coco-dog]  
 
 
detectNet -- loading detection network model from:
          -- prototxt     networks/DetectNet-COCO-Dog/deploy.prototxt
          -- model        networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel
          -- input_blob   'data'
          -- output_cvg   'coverage'
          -- output_bbox  'bboxes'
          -- mean_pixel   0.000000
          -- class_labels networks/DetectNet-COCO-Dog/class_labels.txt
          -- threshold    0.500000
          -- batch_size   2
 
[TRT]  TensorRT version 5.0.6
[TRT]  detected model format - caffe  (extension '.caffemodel')
[TRT]  desired precision specified for GPU: FASTEST
[TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]  native precisions detected for GPU:  FP32, FP16
[TRT]  selecting fastest native precision for GPU:  FP16
[TRT]  attempting to open engine cache file .2.1.GPU.FP16.engine
[TRT]  cache file not found, profiling network model on device GPU
[TRT]  device GPU, loading  
[TRT]  CaffeParser: Could not open file
[TRT]  CaffeParser: Could not parse model file
[TRT]  device GPU, failed to parse caffe network
device GPU, failed to load
detectNet -- failed to initialize.
detectnet-console:   failed to initialize detectNet

No other commands such as segnet-console work either, each generating a similar error message.

Any ideas about what could be wrong?

Hi there, it’s looking like it’s unable to find the DetectNet-COCO-Dog model. Maybe it didn’t download properly?

You can try running these commands to download it again:

$ cd jetson-inference/data/networks
$ wget --no-check-certificate 'https://nvidia.box.com/shared/static/3qdg3z5qvl8iwjlds6bw7bwi2laloytu.gz' -O DetectNet-COCO-Dog.tar.gz
$ tar -xzvf DetectNet-COCO-Dog.tar.gz
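To confirm the extraction succeeded, a quick sanity check along these lines can help. The helper function below is my own sketch, not part of jetson-inference; the file names are taken from the paths in the error log above:

```shell
# Hypothetical helper (not part of jetson-inference): check that the
# files detectNet tried to load are present in the extracted model dir.
check_dog_model() {
    for f in deploy.prototxt snapshot_iter_38600.caffemodel class_labels.txt; do
        # each of these files is referenced in the detectNet log above
        [ -f "$1/$f" ] || { echo "MISSING: $1/$f"; return 1; }
    done
    echo "all DetectNet-COCO-Dog files present"
}

# Usage (from jetson-inference/data/networks):
#   check_dog_model DetectNet-COCO-Dog
```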

If you continue to have problems with the models, I recommend re-building (which will re-download all the models):

$ cd jetson-inference
$ rm -r -f build
$ mkdir build
$ cd build
$ cmake ../
$ make

Hi,

that worked, thank you! I was simply missing the pre-trained model. Because the tutorial at https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-console-2.md says “The following pretrained DetectNet models are included with the tutorial”, I assumed the models ship with the repo by default.

By the way, where did you find the link you used in the wget command? I would like to find the link for the ped-100 model but can’t find it.

Update: I realised that the models are supposed to be downloaded via wget commands located in CMakePreBuild.sh, but all my wget commands are failing with “Unable to establish SSL connection”, which is why I was missing some models.

I solved the issue by finding the tar.gz link for the model I needed in CMakePreBuild.sh and downloading the model manually via a browser, which worked and did not fail with an SSL error.

Thanks tadejb3tsi, that’s correct - the models should be downloaded automatically when building the repo, but if there was a problem, you can find the URLs in the CMakePreBuild.sh script.
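For anyone else hunting for a specific model URL (like the ped-100 question above), a grep one-liner along these lines can pull the download links out of the script. This is just a sketch; the pattern is an assumption based on the nvidia.box.com link format shown earlier in the thread:

```shell
# Hypothetical helper: print the nvidia.box.com download URLs found in a
# build script such as CMakePreBuild.sh (URL pattern assumed from the
# link format used earlier in this thread).
list_model_urls() {
    grep -o 'https://nvidia\.box\.com/shared/static/[A-Za-z0-9._-]*' "$1"
}

# Usage (script location within the repo may vary):
#   list_model_urls CMakePreBuild.sh
```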