Running object detection models with DeepStream

Hi there,

I am trying to run an ssdlite_mobilenet_v2_coco_2018_05_09 model with DeepStream. Following the documentation under /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD, I am using the config file /usr/src/tensorrt/samples/sampleUffSSD/config.py to produce the UFF file that the application needs. This works for ssd_inception_v2_coco, but not for my model. Could you please provide the right config file for my case?

Thank you in advance.

Could you please elaborate on why it does not work for your model?
I tried it on my side and it works well, as shown below:

Model: ssdlite_mobilenet_v2_coco from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md (ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz)
Docker: nvcr.io/nvidia/tensorflow:19.07-py3 + TensorRT-5.1.5.0
Config file: /usr/src/tensorrt/samples/sampleUffSSD/config.py
Command and log:

ssdlite_mobilenet_v2_coco_2018_05_09# convert-to-uff --input-file frozen_inference_graph.pb -O NMS -p config.py
2019-12-12 15:16:00.262114: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.1
WARNING: Logging before flag parsing goes to stderr.
W1212 15:16:05.170502 140165582161728 deprecation_wrapper.py:119] From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py:18: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

Loading frozen_inference_graph.pb
W1212 15:16:05.173045 140165582161728 deprecation_wrapper.py:119] From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py:231: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

NOTE: UFF has been tested with TensorFlow 1.12.0. Other versions are not guaranteed to work
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
W1212 15:16:05.279242 140165582161728 deprecation_wrapper.py:119] From /usr/lib/python3.6/dist-packages/graphsurgeon/_utils.py:2: The name tf.NodeDef is deprecated. Please use tf.compat.v1.NodeDef instead.

WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
UFF Version 0.6.3
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
W1212 15:16:06.065682 140165582161728 deprecation_wrapper.py:119] From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:179: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
No. nodes: 606
UFF Output written to frozen_inference_graph.uff
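
For reference, the preprocessing script that -p points at looks roughly like the sketch below. This is a trimmed, from-memory approximation of sampleUffSSD's config.py, so take the exact plugin parameters (featureMapShapes, thresholds, and so on) from your local /usr/src/tensorrt/samples/sampleUffSSD/config.py; the point here is that the Input shape and numClasses defined in it must match the exported model (300x300 and 91 classes for the stock COCO checkpoints).

import graphsurgeon as gs
import tensorflow as tf

# Placeholder/plugin nodes that replace the TensorFlow subgraphs TensorRT cannot parse.
Input = gs.create_node("Input",
                       op="Placeholder",
                       dtype=tf.float32,
                       shape=[1, 3, 300, 300])  # NCHW, 300x300 for the stock SSD(lite) COCO models

PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
                                 numLayers=6,
                                 minSize=0.2,
                                 maxSize=0.95,
                                 aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
                                 variance=[0.1, 0.1, 0.2, 0.2],
                                 featureMapShapes=[19, 10, 5, 3, 2, 1])

NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
                            shareLocation=1,
                            varianceEncodedInTarget=0,
                            backgroundLabelId=0,
                            confidenceThreshold=1e-8,
                            nmsThreshold=0.6,
                            topK=100,
                            keepTopK=100,
                            numClasses=91,  # 91 COCO classes, background included
                            inputOrder=[0, 2, 1],
                            confSigmoid=1,
                            isNormalized=1)

concat_priorbox = gs.create_node("concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT",
                                       dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT",
                                        dtype=tf.float32, axis=1, ignoreBatch=0)

# Map whole TensorFlow namespaces onto the nodes defined above.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Collapse the mapped namespaces into single plugin nodes and keep NMS as the only output.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)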

Hi there,

OK, the UFF has been generated, but I still cannot run the application. It says:

UffParser: Parser error: BoxPredictor_0/Reshape: Reshape: Volume mismatch. Note: Initial Volume = 4332, New Volume = 8664

Input size = 300 x 600
Object Classes = 15

Any ideas how to fix this?

Thanks.

Reshape only changes the dimension values; it does not change the total number of elements in the tensor.
From this error it looks like the total size of the tensor changes across the Reshape because of incorrect output dimensions. Since 8664 is exactly double 4332, please check whether one dimension value has been doubled by mistake.
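
As a concrete sanity check: assuming the usual ssdlite_mobilenet_v2 layout, the first box predictor at a 300x300 input works on a 19x19 feature map with 3 anchors per cell and 4 box coordinates per anchor, which gives exactly the reported initial volume; stretching the input to 300x600 doubles one spatial dimension of that feature map and therefore the volume. A quick check (the 19x19 / 3-anchor figures are an assumption about the model layout, not something taken from the log):

# Reproduce the reported Reshape volumes.
# Assumed ssdlite_mobilenet_v2 layout: first box-predictor feature map is 19x19
# at a 300x300 input, with 3 anchors per cell and 4 box coordinates per anchor.
anchors_per_cell, box_coords = 3, 4

vol_300x300 = 19 * 19 * anchors_per_cell * box_coords  # 4332, the reported "Initial Volume"
vol_300x600 = 19 * 38 * anchors_per_cell * box_coords  # 8664, the reported "New Volume"
print(vol_300x300, vol_300x600)                        # 4332 8664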

Can this be checked via any of the config files? Could you please name the attribute to tweak, or should I look into the network definition?

You can use https://github.com/lutzroeder/netron to view your network. Find the "BoxPredictor_0/Reshape" node and check its dimensions.
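
If Netron is not at hand, the same information can also be read straight from the frozen graph with TensorFlow; here is a quick sketch (it assumes the frozen_inference_graph.pb from the log above and a TF 1.x-compatible install):

# Print the BoxPredictor_0/Reshape node(s) and their inputs from the frozen graph.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.name.startswith("BoxPredictor_0/Reshape"):
        print(node.name, node.op, list(node.input))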

I haven’t tweaked the model. Why would there be an issue with the dimension?

As shown in my conversion log above, the input size is 300 x 300, not 300 x 600.
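
In other words, the UFF above was generated for a 300x300 input and 91 COCO classes, so the DeepStream side has to be configured consistently. As a rough sketch, the relevant lines in the objectDetector_SSD nvinfer config (config_infer_primary_ssd.txt in the DeepStream 4.0 sample; please verify the exact property names and values against your local copy) would be along these lines:

uff-file=frozen_inference_graph.uff
uff-input-dims=3;300;300;0
uff-input-blob-name=Input
num-detected-classes=91
output-blob-names=MarkOutput_0

If you retrain for 15 custom classes, the class count and label file need to change accordingly, and the UFF has to be regenerated with a matching numClasses in config.py.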