Running facenet caffe model

Hi,
Is there a way in DeepStream SDK 4 to create an engine file from a caffemodel?

I'm using the facenet caffemodel downloaded from https://nvidia.box.com/shared/static/wjitc00ef8j6shjilffibm6r2xxcpigz.gz.
For that I created my own config file like this:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../models/facenet-120/facenet.caffemodel
proto-file=../../models/facenet-120/deploy.prototxt
model-engine-file=../../models/facenet-120/facenet.caffemodel_b1_fp32.engine
labelfile-path=../../models/facenet-120/class_labels.txt

batch-size=1
process-mode=1
model-color-format=0

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
#parse-func=4
output-blob-names=output_bbox;output_cov

[class-attrs-all]
threshold=0.2
group-threshold=1

# Set eps=0.7 and minBoxes for enable-dbscan=1

eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=1920
detected-max-h=1920
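A note on one of the values above: net-scale-factor=0.0039215697906911373 is just 1/255 stored at float32 precision, i.e. DeepStream multiplies each 8-bit pixel value by it to rescale inputs into the [0, 1] range. A quick check:

```python
# net-scale-factor in the config is 1/255: each 8-bit pixel value
# (0-255) is multiplied by it to map inputs into [0, 1].
scale = 1.0 / 255.0
# The long literal in the config is this same ratio after a round-trip
# through float32 precision, so the two agree to within ~1e-9.
print(abs(scale - 0.0039215697906911373) < 1e-8)  # True
```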

In the config file I'm passing the caffemodel, the prototxt file, and the engine file path ('/models/facenet-120/facenet.caffemodel_b1_fp32.engine'), hoping it will generate an engine file that can be used for inference. But I can't figure out what values to pass for the output layers in 'output-blob-names'.
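One way to find candidate output-blob-names is to scan deploy.prototxt for blobs that appear as a layer top but are never consumed as a bottom; those are the network outputs. A minimal sketch (the inline prototxt snippet here is illustrative, not the actual facenet network):

```python
import re

def find_output_blobs(prototxt_text):
    # Blobs that a layer produces ("top") but that no later layer
    # consumes ("bottom") are the network's output blobs.
    tops = re.findall(r'top:\s*"([^"]+)"', prototxt_text)
    bottoms = set(re.findall(r'bottom:\s*"([^"]+)"', prototxt_text))
    return [t for t in tops if t not in bottoms]

# Tiny illustrative snippet; a real DetectNet-style deploy.prototxt
# ends in coverage and bbox layers whose tops are the values to list
# in output-blob-names.
sample = '''
layer { name: "conv1" bottom: "data" top: "conv1" }
layer { name: "output_cov" bottom: "conv1" top: "output_cov" }
layer { name: "output_bbox" bottom: "conv1" top: "output_bbox" }
'''
print(find_output_blobs(sample))  # ['output_cov', 'output_bbox']
```

Note that in-place layers (ReLU, Dropout) reuse the same blob name for top and bottom, so they drop out of the result automatically.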

Is there a config file available for this model?

Hi,

Yes, DeepStream supports Caffe-based models.

The default bounding-box parser in DeepStream is for ResNet.
We also have a DetectNet bounding-box parser that can be used for facenet:
https://github.com/AastaNV/DeepStream/tree/master/parser_detectnet

Update these configuration entries to link the customized parser:

parse-func=0
parse-bbox-func-name=parse_bbox_custom_detectnet
parse-bbox-lib-name=/path/to/libnvparsebbox.so

Thanks.