Hi,
I just set up a new Jetson Nano and tried running detectnet-console by cloning the `jetson-inference` repo and building it from source as per the instructions.
After building, I moved to ./build/aarch64/bin (where the compiled binaries should be) and ran:
./detectnet-console dog.jpg dog2.jpg coco-dog
which should detect dogs in an image using the coco-dog model.
However, the command failed with the following output:
detectnet-console
args (4): 0 [./detectnet-console] 1 [dog.jpg] 2 [dog2.jpg] 3 [coco-dog]
detectNet -- loading detection network model from:
-- prototxt networks/DetectNet-COCO-Dog/deploy.prototxt
-- model networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- class_labels networks/DetectNet-COCO-Dog/class_labels.txt
-- threshold 0.500000
-- batch_size 2
[TRT] TensorRT version 5.0.6
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file .2.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading
[TRT] CaffeParser: Could not open file
[TRT] CaffeParser: Could not parse model file
[TRT] device GPU, failed to parse caffe network
device GPU, failed to load
detectNet -- failed to initialize.
detectnet-console: failed to initialize detectNet
Other commands such as segnet-console don't work either, generating similar error messages.
Any ideas about what could be wrong?
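In case it helps with diagnosis: the `CaffeParser: Could not open file` lines made me suspect the model files themselves are missing, so here is a quick sanity check I put together to run from ./build/aarch64/bin. The paths are copied from the log above; that they should be resolved relative to the directory the binary is run from is my assumption.

```shell
#!/bin/sh
# Report whether a given model file exists (relative to the current directory).
check_file() {
    if [ -e "$1" ]; then
        echo "found:   $1"
    else
        echo "MISSING: $1"
    fi
}

# Paths copied from the detectNet log output above.
for f in networks/DetectNet-COCO-Dog/deploy.prototxt \
         networks/DetectNet-COCO-Dog/snapshot_iter_38600.caffemodel \
         networks/DetectNet-COCO-Dog/class_labels.txt; do
    check_file "$f"
done
```

If any of these come back MISSING, that would explain the parse failure rather than anything TensorRT-specific.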