I am able to run the jetson-inference imagenet code sample, which uses GoogLeNet by default. I tried to run the same code with AlexNet as an argument, but execution gets killed while building the TensorRT/CUDA engine. Here is the log:
~/.../build/aarch64/bin$ ./imagenet-console orange_0.jpg output3.jpg alexnet
imagenet-console
  args (4):  0 [./imagenet-console]  1 [orange_0.jpg]  2 [output3.jpg]  3 [alexnet]

imageNet -- loading classification network model from:
         -- prototxt     networks/alexnet.prototxt
         -- model        networks/bvlc_alexnet.caffemodel
         -- class_labels networks/ilsvrc12_synset_words.txt
         -- input_blob   'data'
         -- output_blob  'prob'
         -- batch_size   2

[TRT]  TensorRT version 5.0.6
[TRT]  detected model format - caffe  (extension '.caffemodel')
[TRT]  desired precision specified for GPU: FASTEST
[TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]  native precisions detected for GPU:  FP32, FP16
[TRT]  selecting fastest native precision for GPU:  FP16
[TRT]  attempting to open engine cache file networks/bvlc_alexnet.caffemodel.2.1.GPU.FP16.engine
[TRT]  cache file not found, profiling network model on device GPU
[TRT]  device GPU, loading networks/alexnet.prototxt networks/bvlc_alexnet.caffemodel
[TRT]  retrieved Output tensor "prob":  1000x1x1
[TRT]  retrieved Input tensor "data":  3x227x227
[TRT]  device GPU, configuring CUDA engine
[TRT]  device GPU, building FP16:  ON
[TRT]  device GPU, building INT8:  OFF
[TRT]  device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
Killed
The process is killed partway through the CUDA engine build, with no error message beyond "Killed". Am I missing something?
Thanks in advance for the help!