Problem with SSD Mobilenet TensorRT conversion

Hi, I am trying to convert the SSD MobileNet model (ssd_mobilenet_v2_coco_2018_03_29) to an engine file. I followed all the steps given on the page GitHub - AastaNV/TRT_object_detection: Python sample for referencing object detection model with TensorRT, and I get the error below:
Traceback (most recent call last):
  File "main.py", line 19, in <module>
    ctypes.CDLL("/home/einfochips/Xavier board all codes latest/TRT_object_detection-master/lib/libflattenconcat.so")
  File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libnvinfer.so.5: cannot open shared object file: No such file or directory
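
For reference, libnvinfer.so.5 belongs to TensorRT 5, so the prebuilt libflattenconcat.so seems to link against an older TensorRT than the one installed here. A quick sanity check of the installed version (a minimal sketch, assuming the tensorrt Python bindings are importable):

import tensorrt as trt

# The bindings version should match the installed libnvinfer
print(trt.__version__)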

I followed the link below

and tried to build "libflattenconcat.so", but I was unable to. As far as I can tell, I am using TensorRT 7.0, where this plugin comes pre-installed.
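
As a check, the TensorRT plugin registry can be listed from Python; a minimal sketch (FlattenConcat_TRT should appear there if the built-in plugins are available):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')

# Print every registered plugin creator with its version
for creator in trt.get_plugin_registry().plugin_creator_list:
    print(creator.name, creator.plugin_version)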

I have been stuck here for days. Kindly help me solve this issue.

Hardware: Jetson Nano
OS: Ubuntu 18.04
CUDA: 10.1
TensorFlow: 2.x
TensorRT: 7.0

Hi,

You don't need to manually load the plugin with TensorRT v7.0.
Please remove the line below and try again:

ctypes.CDLL("lib/libflattenconcat.so")
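
With TensorRT 7, the FlattenConcat plugin ships inside libnvinfer_plugin and only needs to be registered, not loaded from a custom .so. A minimal sketch of the replacement:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
# Registers all built-in plugins (FlattenConcat_TRT, GridAnchor_TRT, NMS_TRT, ...)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')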

Thanks.

@AastaLLL Thanks. I removed line 18: ctypes.CDLL("lib/libflattenconcat.so")

Now I get the error below:

2020-12-08 10:31:53.822486: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Graph created successfully
NOTE: UFF has been tested with TensorFlow 1.15.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.9
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]
=========================================

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:226: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:143] Marking ['NMS'] as outputs
No. nodes: 1094
UFF Output written to tmp.uff
#assertionflattenConcat.cpp,49
Aborted (core dumped)

After this I get an error saying "Python3 stopped unexpectedly" and that there is not enough memory to analyse the problem.

But $ df -h gives:

Filesystem      Size  Used Avail Use% Mounted on
/dev/mmcblk0p1   59G   27G   30G  48% /
none            1.8G     0  1.8G   0% /dev
tmpfs           2.0G   20M  2.0G   1% /dev/shm
tmpfs           2.0G   44M  1.9G   3% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           397M   16K  397M   1% /run/user/120
tmpfs           397M  112K  397M   1% /run/user/1000
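
Note that df -h reports disk space rather than RAM, which is what the "stopped unexpectedly" message refers to. Available RAM can be read from /proc/meminfo; a small sketch:

# Read memory stats from /proc/meminfo (Linux only)
with open('/proc/meminfo') as f:
    meminfo = dict(line.split(':', 1) for line in f)

print('MemTotal:    ', meminfo['MemTotal'].strip())
print('MemAvailable:', meminfo['MemAvailable'].strip())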

@AastaLLL @kayccc: Hello team, kindly help me solve this issue. I have checked that the board is at its maximum performance level using sudo nvpmodel -q, which reports NV Power Mode: MAXN.

I am unable to figure out the source of the error.

Hi,

Sorry for the late update.

The error is independent of the device clock mode; it comes from the configuration itself.
Have you updated the configuration based on your model yet?
https://github.com/AastaNV/TRT_object_detection/blob/master/config/model_ssd_mobilenet_v2_coco_2018_03_29.py

If yes, please run the tmp.uff model through trtexec with verbose logging enabled and share the log with us.

$ /usr/src/tensorrt/bin/trtexec --uff=tmp.uff --uffInput=Input,3,300,300 --output=NMS --verbose
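
Roughly the same verbose parse/build log can also be produced from Python; a minimal sketch, assuming the TensorRT 7 UFF parser:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)  # same effect as --verbose
trt.init_libnvinfer_plugins(TRT_LOGGER, '')

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input('Input', (3, 300, 300))  # --uffInput=Input,3,300,300
    parser.register_output('NMS')                  # --output=NMS
    parser.parse('tmp.uff', network)
    builder.max_workspace_size = 1 << 28
    engine = builder.build_cuda_engine(network)    # None on failure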

Thanks.

@AastaLLL: Hi, I did not make any modifications to the config, as I am running the input model as-is. Please find the log output you requested:

output_log (468.3 KB)

Hi,

Sorry, the repository was published for TensorRT 5.0 and hasn't been updated for a while.

This is for TensorRT 7.1.3 with ssd_mobilenet_v2_coco_2018_03_29 downloaded from this page.
You can run main.py after applying the update below:

diff --git a/config/model_ssd_mobilenet_v2_coco_2018_03_29.py b/config/model_ssd_mobilenet_v2_coco_2018_03_29.py
index 3c9f3b8..4046ce5 100644
--- a/config/model_ssd_mobilenet_v2_coco_2018_03_29.py
+++ b/config/model_ssd_mobilenet_v2_coco_2018_03_29.py
@@ -55,11 +55,13 @@ def add_plugin(graph):
     concat_box_loc = gs.create_plugin_node(
         "concat_box_loc",
         op="FlattenConcat_TRT",
+        axis=1
     )

     concat_box_conf = gs.create_plugin_node(
         "concat_box_conf",
         op="FlattenConcat_TRT",
+        axis=1
     )

     namespace_plugin_map = {
diff --git a/main.py b/main.py
index 09537e5..a002c33 100644
--- a/main.py
+++ b/main.py
@@ -15,7 +15,6 @@ import graphsurgeon as gs
 #from config import model_ssd_mobilenet_v1_coco_2018_01_28 as model
 from config import model_ssd_mobilenet_v2_coco_2018_03_29 as model

-ctypes.CDLL("lib/libflattenconcat.so")
 COCO_LABELS = coco.COCO_CLASSES_LIST


@@ -28,7 +27,7 @@ runtime = trt.Runtime(TRT_LOGGER)
 # compile model into TensorRT
 if not os.path.isfile(model.TRTbin):
     dynamic_graph = model.add_plugin(gs.DynamicGraph(model.path))
-    uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(), model.output_name, output_filename='tmp.uff')
+    uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(), model.output_name, output_filename='tmp.uff',text=True)

     with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
         builder.max_workspace_size = 1 << 28
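
The added axis=1 supplies the concatenation axis that the FlattenConcat plugin expects, which is likely what the flattenConcat.cpp,49 assertion above was complaining about, and text=True additionally writes a human-readable copy of the UFF graph next to tmp.uff. Once main.py has serialized the engine, it can be sanity-checked with a short script; a sketch (the engine filename is defined by TRTbin in the config and is assumed here):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')

# Filename assumed; in the repository it comes from model.TRTbin
with open('TRT_ssd_mobilenet_v2_coco_2018_03_29.bin', 'rb') as f, \
        trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# List the engine's bindings: expect Input plus the NMS outputs
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))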

We have checked this and confirmed that it works in our environment.
Please share your result with us, and sorry for the inconvenience.

Thanks.