I’ve been attempting to work through this tutorial on my newly acquired Jetson Nano dev board.
I’m powering the board via the barrel jack connector and made sure nvpmodel was set to mode 0.
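For reference, this is roughly how I checked the power configuration (a sketch of the standard Jetson commands; `jetson_clocks` is optional and just pins clocks to max so the build isn’t throttled):

```shell
# Query the current power profile; mode 0 is the 10 W MAXN profile on the Nano.
sudo nvpmodel -q

# Optionally lock the clocks at maximum while the engine builds.
sudo jetson_clocks
```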
When running ./imagenet-console orange_0.jpg output_0.jpg, I get to the point where it says “building CUDA engine, this could take a few minutes…” and then nothing happens. It never completes.
I’m not getting a crash or anything like I’ve seen other people report, but I’ve let this run for hours and it never actually succeeds. (Hitting Ctrl+C does eventually break back out to the console.)
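One thing I can do while it sits there (a hedged guess on my part: the Nano’s 4 GB of RAM can be exhausted during TensorRT engine optimization, which would look like a silent hang) is watch memory from a second terminal:

```shell
# One-shot snapshot of RAM and swap usage in MiB.
free -m

# Refresh every 5 seconds while imagenet-console is running in the other terminal.
watch -n 5 free -m
```

If the “available” column drops to near zero while the build is stuck, that would point at memory pressure rather than a hang in TensorRT itself.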
imagenet-console
  args (3):  0 [./imagenet-console]  1 [orange_0.jpg]  2 [output_0.jpg]

imageNet -- loading classification network model from:
         -- prototxt     networks/googlenet.prototxt
         -- model        networks/bvlc_googlenet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   2

[TRT]  TensorRT version 5.0.6
[TRT]  detected model format - caffe  (extension '.caffemodel')
[TRT]  desired precision specified for GPU: FASTEST
[TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]  native precisions detected for GPU:  FP32, FP16
[TRT]  selecting fastest native precision for GPU:  FP16
[TRT]  attempting to open engine cache file networks/bvlc_googlenet.caffemodel.2.1.GPU.FP16.engine
[TRT]  cache file not found, profiling network model on device GPU
[TRT]  device GPU, loading networks/googlenet.prototxt networks/bvlc_googlenet.caffemodel
[TRT]  retrieved Output tensor "prob":  1000x1x1
[TRT]  retrieved Input tensor "data":  3x224x224
[TRT]  device GPU, configuring CUDA engine
[TRT]  device GPU, building FP16:  ON
[TRT]  device GPU, building INT8:  OFF
[TRT]  device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)