Problem converting ONNX model to TensorRT Engine for SSD Mobilenet V2

Hi everyone,
I have trained an SSD Mobilenet v2 model on my dataset. It converted to ONNX successfully, but converting the ONNX model to a TensorRT engine throws an error due to the unsupported datatype UINT8.
Is there any workaround to generate the TensorRT engine? If I have to retrain the model with a supported datatype, how do I change the model's datatype from uint8 to a supported one?

TensorRT version: 7.1

Thank You

This is because TRT does not support the UINT8 datatype. It means your model already uses the uint8 datatype:
https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/FoundationalTypes/DataType.html
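
To confirm which input carries UINT8, you can inspect the exported model's graph inputs with the onnx Python package (a minimal sketch; the file name is a placeholder):

import onnx

# Print each graph input's name and ONNX element type, e.g. "image_tensor:0 UINT8".
model = onnx.load("onnx_model.onnx")
for inp in model.graph.input:
    elem_type = inp.type.tensor_type.elem_type
    print(inp.name, onnx.TensorProto.DataType.Name(elem_type))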

Can I change UINT8 to INT32 in my model?

Hi,

What framework did you use for training SSD Mobilenet v2?
If it is TensorFlow, would you mind giving this GitHub repository a try?

Thanks.

Hi @AastaLLL,

After executing main.py from the suggested GitHub repository, I got the following error:

OSError: libnvinfer.so.5: cannot open shared object file: No such file or directory

My TensorRT version is 7.1.

Hi,

The FlattenConcat plugin is installed by default in TensorRT 7.1.
So please comment out the plugin library link here to skip the error:
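
The exact line depends on the repository, but it is usually a ctypes call that loads the standalone plugin library, roughly like the hypothetical snippet below:

import ctypes

# Hypothetical example of the plugin load in the repository's script.
# With TensorRT 7.1 the FlattenConcat plugin is built in, so this line can be commented out:
# ctypes.CDLL("lib/libflattenconcat.so")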

Thanks.

Hi @AastaLLL,

Thank you for reply.

This GitHub repo worked for me for converting a .uff file to a .bin engine file.
I am still unable to convert the ONNX file to a TensorRT engine file due to the UINT8 datatype used in SSD MobileNet V2. Is there any way to generate an engine file from the ONNX file for SSD MobileNet V2 by changing the datatype from UINT8 to another one?
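
One thing that could be tried at the ONNX level (only a sketch, assuming the single UINT8 tensor is the graph input that feeds a Cast-to-float node; file names are placeholders) is rewriting the input's element type with the onnx package:

import onnx

# Switch the graph input's element type from UINT8 to FLOAT.
# The downstream Cast-to-float then becomes a no-op, but any node that genuinely
# expects uint8 data will still break, so treat this purely as an experiment.
model = onnx.load("onnx_model.onnx")
inp = model.graph.input[0]
print("before:", onnx.TensorProto.DataType.Name(inp.type.tensor_type.elem_type))
inp.type.tensor_type.elem_type = onnx.TensorProto.FLOAT
onnx.save(model, "onnx_model_float_input.onnx")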

Hi,

UINT8 isn’t supported by TensorRT.

But we haven't seen this issue with SSD Mobilenet v2 before.
May I know how you train and serialize your model?

Thanks.

Hi @AastaLLL,

I have trained the model on my desktop with Tensorflow-1.15 and generated the frozen graph. Then I transferred this frozen graph to the Jetson Nano and converted it to an ONNX model using the following command:

python3 -m tf2onnx.convert --saved-model saved_model/ --output onnx_model.onnx --opset 11 --fold_const

And then converted the ONNX model to a TensorRT engine with the following code:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, shape=[1, 300, 300, 3]):
    """
    This is the function to create the TensorRT engine
    Args:
        onnx_path : Path to onnx_file.
        shape : Shape of the input of the ONNX file.
    """
    # The ONNX parser requires an explicit-batch network.
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network(explicit_batch) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = (256 << 20)
        with open(onnx_path, 'rb') as model:
            if not parser.parse(model.read()):
                # Print parser errors (e.g. the UINT8 input type failure) instead of failing silently.
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        network.get_input(0).shape = shape
        engine = builder.build_cuda_engine(network)
        return engine

def save_engine(engine, file_name):
    buf = engine.serialize()
    with open(file_name, 'wb') as f:
        f.write(buf)

def load_engine(trt_runtime, plan_path):
    with open(plan_path, 'rb') as f:
        engine_data = f.read()
    engine = trt_runtime.deserialize_cuda_engine(engine_data)
    return engine
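
For completeness, a minimal usage sketch of these helpers (paths and input shape are placeholders):

# Build, save, and later reload the engine (placeholder paths).
engine = build_engine("onnx_model.onnx", shape=[1, 300, 300, 3])
save_engine(engine, "ssd_mobilenet_v2.plan")

runtime = trt.Runtime(TRT_LOGGER)
engine = load_engine(runtime, "ssd_mobilenet_v2.plan")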

Hi,

Would you mind sharing the pb and ONNX files with us so we can check them?
If you are not comfortable sharing them publicly, please pass the link via direct message.

Thanks.

Hi, anujfulari

Have you fixed this issue?
If not, would you mind sharing the model file with us?

Thanks.

Hi, @AastaLLL,

The issue is not fixed. Actually, I have already shared the model files with you through direct message.

Hi,

Sorry for missing that.
We will try to reproduce this issue and update more information with you later.

Thanks.

I have sent the new link via Direct message

Hi,
I had the same problem. So here is what I did to get rid of the uint8 input:

  1. Using graphsurgeon, find the input node (‘image_tensor’).

  2. Remove the node and insert a custom input node with type float32.

  3. Depending on your TensorFlow version: find the Cast/ToFloat node that casts from uint8 → float and change its expected input type to float as well. Actually it should be possible to skip the Cast/ToFloat node entirely, but that screwed up my network.

  4. Write the modified graph to pb.

  5. Use this pb to convert to onnx.

  6. Parse the created onnx file.

The code I used for manipulating the graph .pb:

from tensorflow.core.framework.tensor_shape_pb2 import TensorShapeProto
import graphsurgeon as gs
import numpy as np

# Set these to your model's input resolution (e.g. 300x300 for SSD Mobilenet v2).
HEIGHT = 300
WIDTH = 300

graph = gs.DynamicGraph('/PATH/TO/FROZEN_GRAPH.pb')
image_tensor = graph.find_nodes_by_name('image_tensor')

print('Found Input: ', image_tensor)

cast_node = graph.find_nodes_by_name('Cast')[0]  # Replace 'Cast' with 'ToFloat' if using TensorFlow < 1.15
print("Input Field", cast_node.attr['SrcT'])

cast_node.attr['SrcT'].type = 1  # Change the expected input type to float (DT_FLOAT = 1)
print("Input Field", cast_node.attr['SrcT'])

# Float32 placeholder that will replace the uint8 'image_tensor' input.
input_node = gs.create_plugin_node(name='InputNode', op='Placeholder', shape=(1, HEIGHT, WIDTH, 3))

namespace_plugin_map = {
    'image_tensor': input_node
}

graph.collapse_namespaces(namespace_plugin_map)

graph.write('GRAPH_NO_UINT8.pb')
graph.write_tensorboard('tensorboard_log_modified_graph')

Later on, when specifying the inputs for the ONNX conversion, you’ll have to replace “image_tensor:0” with “InputNode:0”.
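
For example, the tf2onnx call against the modified graph could look roughly like this (the output tensor names are assumptions based on a standard TF Object Detection API export):

python3 -m tf2onnx.convert --graphdef GRAPH_NO_UINT8.pb \
    --inputs InputNode:0 \
    --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0 \
    --output model_no_uint8.onnx --opset 11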

Hi,

Sorry for keeping you both waiting.

We are checking this issue internally.
Will update more information with you later.

Thanks.

Thank you @AastaLLL

Thank you @joel.oswald,
I’ll try your suggestion.

@anujfulari,

Did you happen to convert your model to ONNX and then TensorRT (C++ version) using the above solution?

@joel.oswald,

I tried your solution, but could not succeed.

I am using the SSD Inception v2 2017_11_17 model from TensorFlow.

Should I use opset 8/9, or is it okay if I use 11?

Because when I use 11, I get a different error.

Did you happen to convert SSD Inception to ONNX → TensorRT and run inference (C++ version)?