Unable to run DeepStream on a custom-trained SSD

Hi,

I’ve created a custom config file for running DeepStream on a Jetson Nano:

gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=/deepstream_sdk_v4.0.1_jetson/samples/models/tmp_uff_b24_fp16.engine
uff-file=/deepstream_sdk_v4.0.1_jetson/samples/models/tmp.uff
input-dims=3;300;300;0
uff-input-blob-name=Input
output-blob-names=NMS
labelfile-path=/deepstream_sdk_v4.0.1_jetson/samples/models/custom/labels.txt
batch-size=24
process-mode=1
model-color-format=0

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=3
interval=0
gie-unique-id=1
custom-lib-path=libflattenconcat.so

If custom-lib-path=libflattenconcat.so is used, the feed is visible for only a single frame before the pipeline aborts with the following error:

ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:parseBoundingBox(): Could not find output coverage layer for parsing objects
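As far as I understand, this error appears when nvinfer falls back to its default DetectNet-style parser (which looks for a coverage output layer) because no parse-bbox-func-name is set. For comparison, the objectDetector_SSD sample that ships with DeepStream 4.x pairs the parser function and library roughly like this (the function name and relative path are taken from that sample and may need adjusting for a custom build):

parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so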

If custom-lib-path=sources/nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so is used instead, the following assertion failure is observed:

deepstream-app: nvdsiplugin_ssd.cpp:72: FlattenConcat::FlattenConcat(const void*, size_t): Assertion `mConcatAxisID == 1 || mConcatAxisID == 2 || mConcatAxisID == 3' failed.
Aborted (core dumped)
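This assertion may indicate that the engine was serialized with a different FlattenConcat implementation than the one now deserializing it, so the stored axis value is read incorrectly. One way to rule that out is to rebuild the plugin and use the same .so both when building the engine and when running DeepStream; a rough sketch, assuming the TRT_object_detection repo layout (directory names are illustrative):

cd TRT_object_detection
mkdir -p build && cd build
cmake .. && make   # should produce libflattenconcat.so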

If the following config is used instead:

gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=/deepstream_sdk_v4.0.1_jetson/samples/models/tmp_uff_b24_fp16.engine
uff-file=/deepstream_sdk_v4.0.1_jetson/samples/models/tmp.uff
input-dims=3;300;300;0
uff-input-blob-name=Input
output-blob-names=NMS
labelfile-path=/deepstream_sdk_v4.0.1_jetson/samples/models/custom/labels.txt
batch-size=24
process-mode=1
model-color-format=0

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=3
interval=0
gie-unique-id=1
parse-bbox-func-name=NvDsInferParseCustomSSDUff
custom-lib-path=libflattenconcat.so

The following error is shown:
ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Could not find parse func 'NvDsInferParseCustomSSDUff' in custom library

What values should be set for parse-bbox-func-name and custom-lib-path?

Hi,

The layer names/architecture of your customized SSD may be slightly different.

Did you use this config file: /usr/src/tensorrt/samples/sampleUffSSD/config.py?

If so, would you mind trying the config file shared in this GitHub repo first?
https://github.com/AastaNV/TRT_object_detection/tree/master/config
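
For example, one common way to regenerate the UFF with a preprocessor config (file names are illustrative, assuming the uff package's convert-to-uff tool is installed):

convert-to-uff frozen_inference_graph.pb -o tmp.uff -O NMS -p config.py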

Thanks.

I used https://github.com/AastaNV/TRT_object_detection/tree/master/config to generate my UFF model.

For now I’m using the following config, which gives me this error:

ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:parseBoundingBox(): Could not find output coverage layer for parsing objects

Config:

gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=/deepstream_sdk_v4.0.1_jetson/samples/models/tmp_uff_b24_fp16.engine
uff-file=/deepstream_sdk_v4.0.1_jetson/samples/models/tmp.uff
input-dims=3;300;300;0
uff-input-blob-name=Input
output-blob-names=NMS
labelfile-path=/deepstream_sdk_v4.0.1_jetson/samples/models/custom/labels.txt
batch-size=24
process-mode=1
model-color-format=0

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=3
interval=0
gie-unique-id=1
custom-lib-path=libflattenconcat.so

Same problem here, and no solution yet:
https://devtalk.nvidia.com/default/topic/1064421/deepstream-sdk/objectdetector_ssd-fails-on-ssd_mobilenet_v2/post/5389947/#5389947

Hi,

It looks like this issue is related to TensorRT rather than DeepStream.
Could you try running your model with trtexec first?
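
For example, a minimal sketch of such a check, assuming TensorRT 6's trtexec on Jetson; the LD_PRELOAD is a common way to make the FlattenConcat plugin visible to the UFF parser, not an official trtexec flag:

LD_PRELOAD=./libflattenconcat.so /usr/src/tensorrt/bin/trtexec \
  --uff=tmp.uff --uffInput=Input,3,300,300 --output=NMS --fp16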

Thanks.