Hi,
We can run ssd_mobilenet_v2 with deepstream-app successfully.
Here are our steps for your reference:
1. Compile the objectDetector_SSD sample:
$ cd /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD
$ make -C nvdsinfer_custom_impl_ssd
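If the build stops with an error like "CUDA_VER is not set", pass the CUDA version installed on your board explicitly (10.0 below is an assumption for JetPack 4.2.x / DeepStream 4.0; check /usr/local/cuda/version.txt for yours):
$ make -C nvdsinfer_custom_impl_ssd CUDA_VER=10.0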
2. Prepare the ssd_mobilenet_v2 UFF model:
$ wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
$ tar zxvf ssd_mobilenet_v2_coco_2018_03_29.tar.gz
Download the attached config.py and generate the UFF model with:
$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py ./ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb -o ssd_mobilenet_v2.uff -O NMS -p ./config.py
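If you cannot open forum attachments, the sketch below shows what a UFF preprocessing script for ssd_mobilenet_v2 typically contains. It is not the attached file: the plugin parameters (inputOrder, nmsThreshold, featureMapShapes, etc.) are assumptions based on common community configs, so verify them against the attached config.py.

import graphsurgeon as gs
import tensorflow as tf

# NOTE: parameter values below are typical community values for
# ssd_mobilenet_v2_coco_2018_03_29; verify against the attached config.py.

# Input placeholder replacing the TF "Preprocessor" namespace.
# Shape matches uff-input-dims=3;300;300;0 in the nvinfer config.
Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])

# GridAnchor_TRT generates the SSD prior boxes for the 6 feature maps.
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])

# NMS_TRT replaces the TF "Postprocessor" namespace; its node name must
# match output-blob-names=NMS in the nvinfer config.
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
    inputOrder=[1, 0, 2],  # assumed loc/conf/priorbox order for mobilenet_v2
    confSigmoid=1,
    isNormalized=1)

concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2",
    dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT",
    dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT",
    dtype=tf.float32, axis=1, ignoreBatch=0)

# Map whole TF namespaces onto the TensorRT plugin nodes above.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

# convert_to_uff.py calls preprocess() on the graph before conversion (-p).
def preprocess(dynamic_graph):
    # Drop Assert nodes and fold Identity nodes, which UFF cannot represent.
    dynamic_graph.remove(dynamic_graph.find_nodes_by_op("Assert"),
                         remove_exclusive_dependencies=True)
    dynamic_graph.forward_inputs(dynamic_graph.find_nodes_by_op("Identity"))
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # NMS becomes the graph output; remove the old outputs and the stray
    # Input edge left behind by collapsing the Preprocessor namespace.
    dynamic_graph.remove(dynamic_graph.graph_outputs,
                         remove_exclusive_dependencies=False)
    dynamic_graph.find_nodes_by_op("NMS_TRT")[0].input.remove("Input")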
3. Customize the config file for ssd_mobilenet_v2:
diff --git a/config_infer_primary_ssd.txt b/config_infer_primary_ssd.txt
index bafdff7..9bed8de 100755
--- a/config_infer_primary_ssd.txt
+++ b/config_infer_primary_ssd.txt
@@ -62,9 +62,9 @@ gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
-model-engine-file=sample_ssd_relu6.uff_b1_fp32.engine
+model-engine-file=ssd_mobilenet_v2.uff_b1_fp32.engine
labelfile-path=ssd_coco_labels.txt
-uff-file=sample_ssd_relu6.uff
+uff-file=ssd_mobilenet_v2.uff
uff-input-dims=3;300;300;0
uff-input-blob-name=Input
batch-size=1
@@ -74,7 +74,7 @@ num-detected-classes=91
interval=0
gie-unique-id=1
is-classifier=0
-output-blob-names=MarkOutput_0
+output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
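For reference, the net-scale-factor and offsets values already in this file implement the model's preprocessing: nvinfer feeds the network y = net-scale-factor * (x - offset), so with offset 127.5 and scale 0.0078431372 ≈ 1/127.5 each pixel is mapped from [0,255] to roughly [-1,1], matching the TensorFlow SSD MobileNet preprocessing. output-blob-names must be NMS because that is the name given to the NMS_TRT plugin node in config.py.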
Then run deepstream-app with:
$ deepstream-app -c deepstream_app_config_ssd.txt
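Two notes on this step, assuming the default sample layout: deepstream_app_config_ssd.txt should already point its [primary-gie] section at the infer config edited above, e.g.
[primary-gie]
enable=1
config-file=config_infer_primary_ssd.txt
Also, on the first run nvinfer will not find ssd_mobilenet_v2.uff_b1_fp32.engine yet, so it builds the engine from the UFF file (this can take several minutes) and serializes it for later runs; a warning about failing to deserialize the engine on that first run is expected.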
Thanks.
config.py.txt (2 KB)