[ASK] How to create a TensorRT engine from a TensorFlow frozen graph?

I have created a model to detect a red box (using my own small dataset, only 30 pictures) with the TensorFlow Object Detection API, so I now have a frozen graph. I am using the ssd_inception_v2_coco model, and I also still have the model.ckpt file.

Now I would like to build a TensorRT engine so I can run that model like the object detection example provided in Jetbot (Jetson Nano based). Here are the snippets of code from that object detection example.

I have tried the tutorial from this link: https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt_401/tensorrt-api/python_api/workflows/tf_to_tensorrt.html
I followed it from the “Converting the TensorFlow Model to UFF” section onward, but I get an error at this line:
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)
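
For reference, trt.utils.uff_to_trt_engine comes from the old TensorRT 3.x/4.x Python API and no longer exists in newer releases. On TensorRT 5.x the equivalent flow uses trt.Builder with trt.UffParser; a minimal sketch, where the file names, node names, and input shape are assumptions that must match the converted UFF graph:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 20
    # Assumed node names and CHW shape; they must match your UFF graph.
    parser.register_input('Input', (3, 300, 300))
    parser.register_output('NMS')
    if not parser.parse('model.uff', network):
        raise RuntimeError('UFF parsing failed')
    engine = builder.build_cuda_engine(network)
    with open('model.engine', 'wb') as f:
        f.write(engine.serialize())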

I have also tried the code from this NVIDIA GitHub repo: GitHub - NVIDIA-AI-IOT/tf_trt_models: TensorFlow models accelerated with NVIDIA TensorRT. I followed that tutorial from the “Build TensorRT / Jetson compatible graph” section, but I get a “segmentation fault” in this part of the code (the “optimize the model with TensorRT” step):

# TF-TRT (TensorFlow 1.x): here trt is tensorflow.contrib.tensorrt
from tensorflow.contrib import tensorrt as trt

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',
    minimum_segment_size=50
)
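
For completeness, even when create_inference_graph succeeds, the returned trt_graph is still a TensorFlow GraphDef that runs inside TensorFlow; it is not a standalone TensorRT engine. A usage sketch, assuming the standard Object Detection API tensor names (image_tensor, detection_boxes, etc.):

import numpy as np
import tensorflow as tf

# Placeholder input; in practice this is an HxWx3 uint8 camera frame.
image = np.zeros((300, 300, 3), dtype=np.uint8)

with tf.Graph().as_default() as graph:
    tf.import_graph_def(trt_graph, name='')

with tf.Session(graph=graph) as sess:
    boxes, scores, classes, num = sess.run(
        ['detection_boxes:0', 'detection_scores:0',
         'detection_classes:0', 'num_detections:0'],
        feed_dict={'image_tensor:0': image[None, ...]})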

Is there any other tutorial I can follow to create the engine?

Hi,

May I know whether you want to use TF-TRT or pure TensorRT for inference?
The sample you shared is actually TF-TRT, a TRT wrapper inside the TensorFlow framework.

For better performance, it’s recommended to convert your model to pure TensorRT instead.
You can start from the following tutorial, which also has an ssd_inception_v2_coco sample:
https://github.com/AastaNV/TRT_object_detection
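
For orientation, the pure-TensorRT route in that repo goes frozen .pb → UFF → serialized engine. The first step is the uff converter; a minimal sketch with placeholder file names and an assumed output node. Note that SSD graphs must first be patched with graphsurgeon to replace unsupported ops such as NMS, which the repo’s config scripts handle:

import uff

# Placeholder file names; for SSD the graph must already be patched
# with graphsurgeon, otherwise unsupported ops will fail to convert.
uff.from_tensorflow_frozen_model(
    'frozen_inference_graph.pb',
    output_nodes=['NMS'],
    output_filename='frozen_inference_graph.uff')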

Please let us know the results.
Thanks.

Thank you for the answer.

What I want is to create an engine like the one in the Jetbot example code: https://github.com/NVIDIA-AI-IOT/jetbot/tree/master/notebooks/object_following (see the screenshot in my first post). So I just need to replace that cell with my engine. I do not know whether that is TF-TRT or pure TensorRT. I need my Jetbot to detect a red square, so I can’t use a plain pre-trained model. Actually, I don’t care about the type of pre-trained model (as long as it works!); I only want to detect a red square. How do I achieve that?

In GitHub - AastaNV/TRT_object_detection: Python sample for referencing object detection model with TensorRT, do I need to change the coco.py file, since I only need to detect a red square (1 class only)? After that, how do I do this step: “After that, TensorRT engine can be created directly with the serialized .bin file”? The example engine in Jetbot is a file with the extension .engine, not .bin: https://drive.google.com/open?id=110FbbZzmwIjRlCHl4LzQHURMqeR7w1Wo

@AastaLLL it does not work. I am using the ssd_inception_v2 model, and I get the error KeyError: ‘image_tensor’.

Hi,

Could you share more details about the error?
Does it occur when building the engine or during inference?

By the way, the output TensorRT engine can be used in Jetbot directly.
They are all serialized TensorRT engines, just with different naming conventions.
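
For reference, “used directly” just means deserializing the file; the .bin/.engine extension is only a naming convention. A minimal sketch, assuming TensorRT 5.x and a placeholder file name:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Placeholder file name; .bin and .engine hold the same serialized bytes.
with open('TRT_ssd_model.bin', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())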

Thanks.

@AastaLLL

The error happens when I run the “python3 main.py [image]” command.

Here is the log:

2019-11-18 10:40:55.569771: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py:18: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/graphsurgeon/_utils.py:2: The name tf.NodeDef is deprecated. Please use tf.compat.v1.NodeDef instead.

WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/graphsurgeon/StaticGraph.py:125: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

WARNING: To create TensorRT plugin nodes, please use the `create_plugin_node` function instead.
NOTE: UFF has been tested with TensorFlow 1.12.0. Other versions are not guaranteed to work
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.3
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
input: "image_tensor:0"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]
=========================================

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:179: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Traceback (most recent call last):
  File "main.py", line 31, in <module>
    uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(), model.output_name, output_filename='tmp.uff')
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 181, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 72, in convert_tf2uff_node
    inp_node = tf_nodes[inp_name]
KeyError: 'image_tensor'

I am using the http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz model. Since there is no ssd_mobilenet_v2_coco_2018_03_29 entry in main.py, I renamed my model folder to ssd_inception_v2_coco_2017_11_17, but the files inside the folder are the ones from the link above.

My TensorRT version is 5.1.6.

I created this Python script: https://github.com/jkjung-avt/tensorrt_demos/blob/master/ssd/build_engine.py, for converting TensorFlow SSD models into TensorRT engines. ‘ssd_mobilenet_v2_coco’ is directly supported in the script, so no modification is required.

I also wrote a blog post explaining how it works: https://jkjung-avt.github.io/tensorrt-ssd/. Do check it out.

Thank you. It works when I use the stock ‘ssd_mobilenet_v2_coco’ from the TensorFlow Object Detection API model zoo. When I try to use my own trained model, it does not work. Here is my trained model: model.zip - Google Drive (I know the number of training steps is still small; at this point I just want to check whether the conversion to an engine works or not).

Here are my software versions:
Tensorflow 1.14.0
JetPack 4.2.2
TensorRT 5.1.6-1+cuda10.0

Error message:

UFF Text Output written to /home/jetbot/Notebooks/test_object_detection/tensorrt_demos/ssd/frozen_inference_graph.pbtxt
[TensorRT] ERROR: UffParser: Validator error: FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128_depthwise/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "build_engine.py", line 216, in <module>
    main()
  File "build_engine.py", line 210, in main
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'

I also cannot find the tmp_xxx.pbtxt file in the exported_model folder after running export.sh.

Pay special attention to input_order in the previous step. You can verify it by checking the tmp_xxx.pbtxt debug file: look at the ‘NMS’ node and verify the order of its 3 input tensors.
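
If it helps, here is a crude way to print that node from the debug file (the file name is a placeholder for your tmp_xxx.pbtxt):

# Print the 'NMS' node and a few lines after it from the debug file.
with open('tmp_v2_coco.pbtxt') as f:
    lines = f.readlines()

for i, line in enumerate(lines):
    if '"NMS"' in line:
        print(''.join(lines[i:i + 10]))
        break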

@shalahuddinnn I tried to convert your trained model to frozen_inference_graph.pb with tensorflow 1.12.0 and this snapshot (https://github.com/tensorflow/models/tree/6518c1c7711ef1fdbe925b3c5c71e62910374e3e) of the object detection API. As far as I can tell, it works. You could refer to my “hand-detection-tutorial” (https://github.com/jkjung-avt/hand-detection-tutorial) for that particular snapshot of the object detection API.

The problem you’ve encountered might be due to some later changes in the object detection API (i.e. https://github.com/tensorflow/models), but I cannot be sure. By the way, are you using tensorflow 1.12.x for exporting the frozen graph and converting the pb to uff?

Otherwise, if you’d like to have the frozen_inference_graph.pb, I could send the file to you.

Thank you for your answer!

Sorry, there must be a misunderstanding. I can convert the model and get the .pb file; the problem is that I do not know how to use build_engine.py, because I cannot find the tmp_xxx.pbtxt file in the exported_model folder, which I need in order to check the order of the 3 input tensors.

@shalahuddinnn You could just rename your ‘frozen_inference_graph.pb’ to ‘ssd_mobilenet_v2_egohands.pb’ (since it also outputs only 1 class) and do:

$ cd ${HOME}/project/tensorrt_demos/ssd
$ python3 build_engine.py ssd_mobilenet_v2_egohands

The ‘tmp_v2_egohands.pbtxt’ will be generated after that.
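
To tie this back to the original Jetbot question: once the serialized engine file is produced, renaming it with a .engine suffix should be all the object_following notebook needs. A sketch, where the engine file name is an assumption:

from jetbot import Camera, ObjectDetector

# Renamed copy of the serialized engine produced by build_engine.py
# (assumed file name).
model = ObjectDetector('ssd_mobilenet_v2_egohands.engine')

camera = Camera.instance(width=300, height=300)
detections = model(camera.value)
print(detections)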

I am in a similar situation: I am using a Jetson Nano with JetPack 4.3.
I have a frozen graph, but in my case I’m trying to do transfer learning from MobileNet V2 and detect 8 different objects.
I’ve tried running jkjung13’s build_engine.py, but that gives me an error.

I copied the ssd_mobilenet_v2_coco MODEL_SPECS entry and simply renamed it to mymodel.
I updated num_classes to 8.
I seem to be able to produce a UFF file; there are some errors along the way, but it produces the file.
Then it stops and complains about ‘NoneType’ having no serializer. Something about this line returning a NoneType:
engine = builder.build_cuda_engine(network)

No. nodes: 1094
UFF Output written to /home/team5607/NanoVision5607/transferLearning/tmp_model.uff
UFF Text Output written to /home/team5607/NanoVision5607/transferLearning/tmp_model.pbtxt
[TensorRT] ERROR: UffParser: Parser error: BoxPredictor_0/Reshape: Reshape: Volume mismatch. Note: Initial Volume = 4332, New Volume = 3072
WTF
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "./build_engine.py", line 231, in <module>
    main()
  File "./build_engine.py", line 225, in main
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
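
As a side note, the ‘NoneType’ traceback is only a symptom; the real failure is the UffParser error above it. Guarding the parse and build calls makes that explicit. A minimal sketch against the TensorRT 5.x Python API, with assumed file and node names:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)  # more log detail helps locate the bad node

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input('Input', (3, 300, 300))
    parser.register_output('NMS')
    if not parser.parse('tmp_model.uff', network):
        raise RuntimeError('UFF parsing failed; see the [TensorRT] ERROR lines above')
    engine = builder.build_cuda_engine(network)
    if engine is None:
        raise RuntimeError('build_cuda_engine returned None')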

@mrHale, you need to set “num_classes” to 9: “background” + the 8 object classes.
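
For example, in the copied spec entry (a hypothetical excerpt; keep the other fields exactly as copied from build_engine.py):

# Hypothetical excerpt of the copied MODEL_SPECS entry in build_engine.py:
MODEL_SPECS = {
    'mymodel': {
        # ... other fields copied from the 'ssd_mobilenet_v2_coco' entry ...
        'num_classes': 9,  # 1 "background" class + 8 object classes
    },
}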

I tried that, same error:

[TensorRT] ERROR: UffParser: Parser error: BoxPredictor_0/Reshape: Reshape: Volume mismatch. Note: Initial Volume = 4332, New Volume = 3072
[TensorRT] ERROR: Network must have at least one output
<class 'tensorrt.tensorrt.INetworkDefinition'>
WTF
Traceback (most recent call last):
  File "./build_engine.py", line 232, in <module>
    main()
  File "./build_engine.py", line 226, in main
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'