Inception V2 conversion: TensorFlow to TensorRT

Hi,
I’m trying to use an inception_v2 model frozen in TensorFlow with TensorRT.
I’m on Ubuntu 16.04
CUDA 9.0
cuDNN 7.1
TensorRT 4.0
The model was frozen using TF 1.10.

I used bazel to check my model:

bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=~/Models/tf_generic/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
Found 1 possible inputs: (name=image_tensor, type=uint8(4), shape=[?,?,?,3]) 
No variables spotted.
Found 4 possible outputs: (name=detection_boxes, op=Identity) (name=detection_scores, op=Identity) (name=num_detections, op=Identity) (name=detection_classes, op=Identity) 
Found 25024651 (25.02M) const parameters, 0 (0) variable parameters, and 1540 control_edges
Op types used: 1938 Const, 549 Gather, 452 Minimum, 360 Maximum, 305 Reshape, 197 Sub, 185 Cast, 183 Greater, 180 Split, 180 Where, 122 Mul, 121 StridedSlice, 118 ConcatV2, 117 Shape, 115 Pack, 105 Add, 94 Unpack, 93 Slice, 92 Squeeze, 92 ZerosLike, 90 NonMaxSuppressionV2, 89 Conv2D, 89 BiasAdd, 77 Relu6, 29 Identity, 29 Switch, 26 Enter, 15 RealDiv, 14 Merge, 13 Tile, 12 Range, 11 TensorArrayV3, 9 ExpandDims, 8 NextIteration, 7 AvgPool, 6 TensorArrayWriteV3, 6 Exit, 6 TensorArraySizeV3, 6 TensorArrayGatherV3, 5 TensorArrayReadV3, 5 TensorArrayScatterV3, 5 MaxPool, 4 Fill, 3 Transpose, 3 Assert, 2 Equal, 2 Exp, 2 Less, 2 LoopCond, 1 DepthwiseConv2dNative, 1 Size, 1 Sigmoid, 1 TopKV2, 1 ResizeBilinear, 1 Placeholder
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=~/Models/tf_generic/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --show_flops --input_layer=image_tensor --input_layer_type=uint8 --input_layer_shape=-1,-1,-1,3 --output_layer=detection_boxes,detection_scores,num_detections,detection_classes

This gives me the following output nodes: detection_boxes, detection_scores, num_detections, detection_classes.

I wrote the following code:

import tensorrt as trt
import uff
from tensorrt.parsers import uffparser
import pycuda.driver as cuda
import pycuda.autoinit  # initializes the CUDA driver and creates a context

# Other imports
import os
import numpy as np
from imutils.video import WebcamVideoStream

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)

# Convert the frozen TensorFlow graph to UFF.
# Note: Python does not expand "~" in string paths; expand it explicitly
# (harmless if the real path is already absolute).
model_path = os.path.expanduser(
    "~/Models/tf_generic/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb")
uff_model = uff.from_tensorflow_frozen_model(
    model_path,
    ['detection_boxes', 'detection_scores', 'num_detections', 'detection_classes'])

This is raising the following error:

WARNING:tensorflow:From /usr/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py:146: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
Using output node detection_boxes
Using output node detection_scores
Using output node num_detections
Using output node detection_classes
Converting to UFF graph
Traceback (most recent call last):
  File "~/Models/TMP_NVIDIA/trt_test.py", line 17, in <module>
    uff_model = uff.from_tensorflow_frozen_model("~/Models/tf_generic/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb", ['detection_boxes','detection_scores','num_detections','detection_classes'] )
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
    name="main")
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 76, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 53, in convert_tf2uff_node
    raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
uff.model.exceptions.UffException: detection_classes was not found in the graph. Please use the -l option to list nodes in the graph.

This is quite disappointing… Any ideas?

Regards

Hello,

It would help us debug if you could share the .pb.

Hi,

The .pb can be found in the TensorFlow inception v2 repository; I have the same issue with both the original and a retrained model.

Regards

Hello,

Per Engineering, this model will not work out of the box with TensorRT, as it contains many unsupported operations.

We recommend using the -l option to list all the nodes in the graph and finding the names of the actual output nodes, since detection_classes is not among them.
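If you don’t want to run convert-to-uff just for that, the node names can also be dumped with plain TensorFlow. A minimal sketch, assuming TF 1.x and a placeholder path:

import tensorflow as tf

# Load the frozen graph and print every node's name and op, so the
# actual output node names can be spotted.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.name, node.op)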

If you are using the converter from 5.0 GA, it is not required to provide the names of the output nodes: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/uff.html
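With that converter the call reduces to something like this sketch (output_nodes omitted; the converter then deduces the graph outputs itself):

import uff

# UFF from TensorRT 5.0 GA: output nodes may be omitted and are
# deduced automatically from the graph.
uff_model = uff.from_tensorflow_frozen_model("frozen_inference_graph.pb")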

We also recommend referencing sampleUffSSD to get an idea of how to convert this model.
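The core of sampleUffSSD is a graphsurgeon preprocessor that collapses the graph regions TensorRT cannot parse (preprocessing, postprocessing/NMS) into plugin nodes. Below is a heavily simplified sketch in the spirit of the sample’s config.py; the plugin parameters shown are assumptions for ssd_inception_v2_coco and must be adapted to a retrained model:

import graphsurgeon as gs
import tensorflow as tf

# Stand-ins for the unsupported parts of the graph: a plain input
# placeholder and the TensorRT NMS plugin for the postprocessor.
Input = gs.create_node("Input", op="Placeholder",
                       dtype=tf.float32, shape=[1, 3, 300, 300])
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
                            numClasses=91, topK=100, keepTopK=100,
                            shareLocation=1, confidenceThreshold=1e-8,
                            nmsThreshold=0.6)

# Map whole TF namespaces onto the plugin nodes above.
namespace_plugin_map = {
    "Preprocessor": Input,
    "image_tensor": Input,
    "Postprocessor": NMS,
}

def preprocess(dynamic_graph):
    # Called by the UFF converter before conversion.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)

Such a file is then passed through the preprocessor argument of uff.from_tensorflow_frozen_model, with "NMS" as the output node.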

Hi,

Thank you for your answer.
I’m a little bit disappointed by it, though. I thought TensorRT supported (at least partially) Inception V2, since https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#uffssd_overview says:

“The sampleUffSSD is based on the TensorFlow implementation of SSD. For more information, see ssd_inception_v2_coco.”

Can you clarify this for me?

Regards

Magaly

Did you happen to solve the issue?
I have the same issue with an SSD Inception V2 custom-trained model.