I followed the DIGITS tutorial for retraining the detection model exactly, from: jetson-inference/detectnet-training.md at master · dusty-nv/jetson-inference · GitHub
At the end, I got the dog detection model. It works perfectly when I test it in the DIGITS web UI, so I downloaded the model and extracted it on my Xavier, getting the following files:
train_val.prototxt
deploy.prototxt
original.prototxt
solver.prototxt
mean.binaryproto
snapshot_iter_38600.caffemodel
info.json
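As a sanity check, the output blob names of the network can be read straight from deploy.prototxt (DetectNet models from DIGITS typically expose two outputs, "coverage" and "bboxes"). Below is a sketch of that check; the deploy.prototxt contents here are a minimal stand-in, not my actual file, so point the grep at the real one:

```shell
# Stand-in deploy.prototxt fragment showing the usual DetectNet output layers.
# Replace this with the real file extracted from the DIGITS model.
cat > deploy.prototxt <<'EOF'
layer {
  name: "coverage/sig"
  type: "Sigmoid"
  top: "coverage"
}
layer {
  name: "bbox/regressor"
  type: "Convolution"
  top: "bboxes"
}
EOF

# List the blob names the network produces; the nvinfer output-blob-names
# setting has to match these.
grep 'top:' deploy.prototxt
```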
Then I modified the DeepStream sample config files “source4_720p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt” and “config_infer_primary.txt” to use my own dog detection model. Specifically, I changed the [property] group to the following:
[property]
net-scale-factor=0.0039215697906911373
model-file=../../models/DOG_Model/snapshot_iter_38600.caffemodel
proto-file=../../models/DOG_Model/deploy.prototxt
labelfile-path=../../models/DOG_Model/label.txt
output-blob-names=bboxes
batch-size=30
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1
parse-func=1
Also, my label.txt simply contains (based on the DIGITS tutorial):
dontcare
dog
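For what it's worth, I also checked that the number of entries in label.txt matches num-detected-classes in the infer config. A quick sketch of that check (the files here are recreated inline as stand-ins for my real ones):

```shell
# Stand-in label.txt with the two classes from the DIGITS tutorial.
printf 'dontcare\ndog\n' > label.txt

# num-detected-classes=2 in config_infer_primary.txt should equal the
# number of label lines.
labels=$(wc -l < label.txt)
test "$labels" -eq 2 && echo "label count matches num-detected-classes"
```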
But when I run the deepstream-app, I only get the video playing with no bounding box showing. In addition, the console shows the following messages:
Error: Could not find coverage layer while parsing output.
Error: Could not find coverage layer while parsing output.
.
.
.
Error: Could not find coverage layer while parsing output.
Error: Could not find coverage layer while parsing output.
** INFO: <bus_callback:121>: Received EOS. Exiting …
Quitting
App run successful
I don’t know where it went wrong. Is it because I can’t simply use a model trained with DIGITS in the DeepStream sample, or because label.txt should only include “dog”, or something else? I am kind of lost.