Problem converting ONNX model to TensorRT Engine for SSD Mobilenet V2

Hi, anujfulari

Have you fixed this issue?
If not, would you mind sharing the model file with us?

Thanks.

Hi, @AastaLLL,

The issue is not fixed. Actually, I have already shared the model files with you through a direct message.

Hi,

Sorry for missing that.
We will try to reproduce this issue and update you with more information later.

Thanks.

I have sent the new link via Direct message

Hi,
I had the same problem. So here is what I did to get rid of the uint8 input:

  1. Use graphsurgeon to find the input node ('image_tensor').

  2. Remove the node and insert a custom input node with type float32.

  3. Depending on your TensorFlow version: find the Cast/ToFloat node that casts from uint8 -> float and change its expected input type to float as well. Actually it should be possible to skip the Cast/ToFloat node entirely, but that screwed up my network.

  4. Write the modified graph to .pb.

  5. Use this .pb to convert to ONNX.

  6. Parse the created ONNX file.

Here is the code I used for manipulating the graph.pb:

from tensorflow.core.framework.tensor_shape_pb2 import TensorShapeProto
import graphsurgeon as gs
import numpy as np

# Input resolution of the network (300x300 for a standard SSD); set to your model's values.
HEIGHT = 300
WIDTH = 300

graph = gs.DynamicGraph('/PATH/TO/FROZEN_GRAPH.pb')
image_tensor = graph.find_nodes_by_name('image_tensor')

print('Found Input: ', image_tensor)

# Find the cast node that converts the uint8 image to float.
cast_node = graph.find_nodes_by_name('Cast')[0]  # Replace 'Cast' with 'ToFloat' if using tensorflow <1.15
print("Input Field", cast_node.attr['SrcT'])

cast_node.attr['SrcT'].type = 1  # 1 == DT_FLOAT, so the cast now expects float input instead of uint8
print("Input Field", cast_node.attr['SrcT'])

# Create a new float placeholder that will replace the uint8 'image_tensor' input.
input_node = gs.create_plugin_node(name='InputNode', op='Placeholder', shape=(1, HEIGHT, WIDTH, 3))

namespace_plugin_map = {
    'image_tensor': input_node
}

graph.collapse_namespaces(namespace_plugin_map)

# Write the modified graph and a TensorBoard log for inspection.
graph.write('GRAPH_NO_UINT8.pb')
graph.write_tensorboard('tensorboard_log_modified_graph')

Later on, when specifying the inputs for the ONNX conversion, you’ll have to replace “image_tensor:0” with “InputNode:0”.
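
For example, the tf2onnx call could then look something like this (exact flags can differ between tf2onnx versions; the output tensor names below are the usual TF Object Detection API ones and the file names are placeholders, so adjust everything to your graph):

$ python3 -m tf2onnx.convert --graphdef GRAPH_NO_UINT8.pb --inputs InputNode:0 --outputs num_detections:0,detection_boxes:0,detection_scores:0,detection_classes:0 --opset 11 --output model.onnx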

Hi,

Sorry for keeping you both waiting.

We are checking this issue internally.
We will update you with more information later.

Thanks.

Thank you @AastaLLL

Thank you @joel.oswald,
I’ll try your suggestion.

@anujfulari,

Did you happen to convert your model to ONNX and then to TensorRT (C++ version) using the above solution?

@joel.oswald,

I tried your solution, but could not succeed.

I am using the ssd_inception_v2 2017_11_17 model from TensorFlow.

Should I use opset 8/9, or is it okay if I use 11?

Because when I use 11, I get a different error.

Did you happen to convert SSD Inception to ONNX -> TensorRT and run inference (C++ version)?

Hi @god_ra,
I also used opset 11.

The solution does not fix everything, just the unsupported-UINT8 error.
So if there is still another problem while parsing the ONNX file, it will not be fixed by only modifying the input node.

I used opset 11 too.

Did you successfully run inference with the TensorRT C++ version for any TensorFlow object detection models? Custom-trained models?

No, I use the Python API of TensorRT. (But I have created some working custom-trained TensorRT engines for object detection.)
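
Roughly, building an engine from the ONNX file with the Python API looks like the sketch below (a minimal example assuming TensorRT 7.x with an explicit-batch network; file names are placeholders):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

# The ONNX parser requires an explicit-batch network definition.
explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(explicit_batch) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:

    # Parse the ONNX model and print any parser errors (e.g. unsupported ops).
    with open('model.onnx', 'rb') as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError('Failed to parse the ONNX file')

    # Build and serialize the engine.
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB
    engine = builder.build_engine(network, config)
    with open('model.engine', 'wb') as f:
        f.write(engine.serialize())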

The Python API works well with custom-trained SSD Inception models.

C++ is not producing any detections, and it is also not giving any error messages for me.

I did TensorFlow -> UFF -> TensorRT (Python works, C++ does not work).

I do not know what the problem could be, because there are no error messages.

@AastaLLL @joel.oswald @kayccc,
I got this error after applying the solution mentioned above by @joel.oswald.
It seems the NonZero plugin is missing.
But how to write a custom plugin for it…
Any workaround for this?

Any ideas?

[08/10/2020-14:16:52] [V] [TRT] ModelImporter.cpp:103: Parsing node: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Greater [Greater]
[08/10/2020-14:16:52] [V] [TRT] ModelImporter.cpp:119: Searching for input: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Reshape_9:0
[08/10/2020-14:16:52] [V] [TRT] ModelImporter.cpp:119: Searching for input: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_42/Greater/y:0
[08/10/2020-14:16:52] [V] [TRT] ModelImporter.cpp:125: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Greater [Greater] inputs: [Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Reshape_9:0 -> (-1)], [Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_42/Greater/y:0 -> ()],
[08/10/2020-14:16:52] [V] [TRT] ImporterContext.hpp:141: Registering layer: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Greater for ONNX node: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Greater
[08/10/2020-14:16:52] [V] [TRT] ImporterContext.hpp:116: Registering tensor: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Greater:0 for ONNX tensor: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Greater:0
[08/10/2020-14:16:52] [V] [TRT] ModelImporter.cpp:179: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Greater [Greater] outputs: [Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Greater:0 -> (-1)],
[08/10/2020-14:16:52] [V] [TRT] ModelImporter.cpp:103: Parsing node: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Where [NonZero]
[08/10/2020-14:16:52] [V] [TRT] ModelImporter.cpp:119: Searching for input: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Greater:0
[08/10/2020-14:16:52] [V] [TRT] ModelImporter.cpp:125: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Where [NonZero] inputs: [Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan_9/Greater:0 -> (-1)],
[08/10/2020-14:16:52] [I] [TRT] ModelImporter.cpp:135: No importer registered for op: NonZero. Attempting to import as plugin.
[08/10/2020-14:16:52] [I] [TRT] builtin_op_importers.cpp:3659: Searching for plugin: NonZero, plugin_version: 1, plugin_namespace:
[08/10/2020-14:16:52] [E] [TRT] INVALID_ARGUMENT: getPluginCreator could not find plugin NonZero version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && “Plugin not found, are the plugin name, version, and namespace correct?”
[08/10/2020-14:16:52] [E] Failed to parse onnx file
[08/10/2020-14:16:52] [E] Parsing model failed
[08/10/2020-14:16:52] [E] Engine creation failed
[08/10/2020-14:16:52] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=ssd_frozen.onnx --verbose
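
For reference, the NonZero ops can be listed from the exported ONNX graph like this (a quick sketch using the standard onnx Python package; the file name matches the trtexec call above):

import onnx

model = onnx.load('ssd_frozen.onnx')
# The NonZero ops come from the TF NMS postprocessing (the 'Where' nodes in the log above).
nonzero_nodes = [n.name for n in model.graph.node if n.op_type == 'NonZero']
print(len(nonzero_nodes), 'NonZero nodes found')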

Hi All, @AastaLLL,

I am facing the same problem. I have trained the SSD_mobilenet_v2_coco model on my own dataset and converted it to ONNX using tf2onnx with --opset 11.

The ONNX file creation works fine. But when I try to run this on DeepStream 5.0, I get the UINT8 error when it tries to build the engine file.

Even the ONNX file converted directly from the pretrained model gives the UINT8 error.

Any help would be welcome!

Hi, anujfulari

We tried your model but met some issues with the output dimension.

May I know the class number of your model?
We tried the default value of 91 and cannot match the dimension.

Thanks.

Hi @AastaLLL,
The number of classes in the model is 3, excluding the background class.

Hi, anujfulari

We can run your model with the config file shared in this topic after updating the class number to 4:

diff --git a/config.py b/config.py
index 499a605..444af99 100644
--- a/config.py
+++ b/config.py
@@ -36,7 +36,7 @@ NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
     nmsThreshold=0.6,
     topK=100,
     keepTopK=100,
-    numClasses=3,
+    numClasses=4,
     inputOrder=[0, 2, 1],
     confSigmoid=1,
     isNormalized=1)
$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o sample_ssd_relu6.uff -O NMS -p config.py
$ /usr/src/tensorrt/bin/trtexec --uff=./sample_ssd_relu6.uff --uffInput=Input,3,300,300 --output=NMS

Please give it a try and let us know how it goes.
Thanks.

Thank you @AastaLLL, for your help.
I have tried it, and it worked successfully using the UFF model. But it gives an error when we try the ONNX method.
We were unable to convert the ONNX model to a TensorRT engine.

Thank you again.