Using uff model in deepstream-test3

Hi all,

I am testing the DeepStream Python app deepstream-test3.

I want to change the model to a UFF one, so I tried to modify the config file following config_infer_primary_ssd.txt in objectDetector_SSD.

Error Message:
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest2_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

Config file:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
#labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
offsets=127.5;127.5;127.5
model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
labelfile-path=ssd_coco_labels.txt
uff-file=sample_ssd_relu6.uff
infer-dims=3;300;300
uff-input-order=0
uff-input-blob-name=Input
force-implicit-batch-dim=1
batch-size=1
network-mode=1
process-mode=1
model-color-format=0
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

Hi,

We can update dstest2_pgie_config.txt to use the SSD UFF model from objectDetector_SSD as follows.

...
[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
#model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
#labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
#int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
model-color-format=0
labelfile-path=../../../objectDetector_SSD/ssd_coco_labels.txt
uff-file=../../../objectDetector_SSD/sample_ssd_relu6.uff
infer-dims=3;300;300
uff-input-order=0
uff-input-blob-name=Input
force-implicit-batch-dim=1
batch-size=1
network-mode=0
process-mode=1
num-detected-classes=91
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=MarkOutput_0
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=../../../objectDetector_SSD/nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
#scaling-filter=0
#scaling-compute-hw=0
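
For reference, the engine file is regenerated from the uff file on first run if it is missing. deepstream-test3 takes one or more stream URIs on the command line; a typical run looks like this (the sample clip below is just a placeholder, use any stream you have):

python3 deepstream_test_3.py file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4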

Please give it a try and share the result with us.
Thanks.

ERROR: Could not open lib: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/test/../../../objectDetector_SSD/nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so, error string: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/test/../../../objectDetector_SSD/nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so: cannot open shared object file: No such file or directory

How can I get this file? I searched the whole directory tree and could not find it.

I downloaded the file from the web, but the same error message showed up. I checked that the path is correct.

If I use a custom model (YOLOv3) instead of the sample SSD, what should the value of custom-lib-path be?

Hi,

Based on the config file you shared above, it looks like you are using the SSD UFF model as input rather than YOLO.

uff-file=sample_ssd_relu6.uff

So we assume you need to build the objectDetector_SSD sample first.
The library is generated when you compile the bbox parser in /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD.
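
A minimal build sketch (the CUDA_VER value here is an assumption; set it to your installed CUDA version):

cd /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD
export CUDA_VER=10.2
make -C nvdsinfer_custom_impl_ssd

This produces nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so, which is the file custom-lib-path should point to.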

We don’t support a UFF-based YOLO model, only the Darknet format from the author.

The procedure is similar to the SSD model.
Please follow the README in /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo.
Then update the model and bbox parser settings in deepstream-test3 accordingly; see the sketch below.
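
For example, a minimal sketch of the lines that change in the pgie config (the relative paths assume the same directory layout as the SSD example above; adjust them to your tree):

parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=../../../objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so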

Thanks.