Inference with a YOLOv3 Caffe Model

Hello, I'm trying to run inference on a YOLOv3 Caffe model. The model is downloaded from this repo: https://github.com/lewes6369/TensorRT-Yolov3

I've created a directory for the model under networks/ in jetson-inference,

and I run the following command:
./detectnet-console dog_0.jpg output_0.jpg --prototxt=$NET/yolov3_608_trt.prototxt --model=$NET/yolov3_608.caffemodel --input_blob=data --output_cvg=coverage --output_bbox=bboxes --class_labels=$NET/labels.txt
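
(For reference, $NET is just an environment variable I set to point at that model directory; based on the networks/yolo paths in the log below, it is assumed to be set like this:)

export NET=networks/yolo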

I got the following error:

detectNet -- loading detection network model from:
-- prototxt     networks/yolo/yolov3_608_trt.prototxt
-- model        networks/yolo/yolov3_608.caffemodel
-- input_blob   'data'
-- output_cvg   'coverage'
-- output_bbox  'bboxes'
-- mean_pixel   0.000000
-- mean_binary  NULL
-- class_labels networks/yolo/labels.txt
-- threshold    0.500000
-- batch_size   1

[TRT] TensorRT version 5.0.6
[TRT] loading NVIDIA plugins...
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16, INT8
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/yolo/yolov3_608.caffemodel.1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading networks/yolo/yolov3_608_trt.prototxt networks/yolo/yolov3_608.caffemodel
could not parse layer type Upsample
[TRT] device GPU, failed to parse caffe network
[TRT] device GPU, failed to load networks/yolo/yolov3_608.caffemodel
detectNet -- failed to initialize.
detectnet-console: failed to initialize detectNet