I have created a model to detect a red box (using my own small dataset of only 30 pictures) with the TensorFlow Object Detection API, so I now have a frozen graph. I am using the ssd_inception_v2_coco model, and I still have the model.ckpt file.
Now I would like to build a TensorRT engine so I can run my model like the object detection example provided in JetBot (Jetson Nano based). Here is a snippet of the code from the object detection example.
May I know whether you want to use TF-TRT or pure TensorRT for the inference?
The sample you shared is actually TF-TRT, a TRT wrapper inside the TensorFlow framework.
For better performance, it is recommended to convert your model into pure TensorRT instead.
You can start from the following tutorial which also has a ssd_inception_v2_coco sample:
https://github.com/AastaNV/TRT_object_detection
What I want is to create an engine like the one in the JetBot example code: https://github.com/NVIDIA-AI-IOT/jetbot/tree/master/notebooks/object_following (see the screenshot in my first post). So I just need to replace that cell with my own engine. I do not know whether that is TF-TRT or pure TensorRT. I need my JetBot to detect a red square, so I cannot use a purely pre-trained model. I actually do not care which type of pre-trained model is used (as long as it works!); I only want to detect a red square. How do I achieve that?
The error happens when I run the "python3 main.py [image]" command.
Here is the log:
2019-11-18 10:40:55.569771: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py:18: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/graphsurgeon/_utils.py:2: The name tf.NodeDef is deprecated. Please use tf.compat.v1.NodeDef instead.
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/graphsurgeon/StaticGraph.py:125: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
WARNING: To create TensorRT plugin nodes, please use the `create_plugin_node` function instead.
NOTE: UFF has been tested with TensorFlow 1.12.0. Other versions are not guaranteed to work
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.3
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
input: "image_tensor:0"
attr {
key: "dtype"
value {
type: DT_FLOAT
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 1
}
dim {
size: 3
}
dim {
size: 300
}
dim {
size: 300
}
}
}
}
]
=========================================
Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:179: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Traceback (most recent call last):
File "main.py", line 31, in <module>
uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(), model.output_name, output_filename='tmp.uff')
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 181, in from_tensorflow
debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
uff_graph, input_replacements, debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 72, in convert_tf2uff_node
inp_node = tf_nodes[inp_name]
KeyError: 'image_tensor'
Thank you, it works when I use the pure 'ssd_mobilenet_v2_coco' from the TensorFlow object detection API model zoo. When I try to use my own trained model, it does not work. Here is my trained model: model.zip - Google Drive (I know the number of training steps is still small, but at this point I just want to test whether the conversion to an engine works or not.)
Here are my software versions:
Tensorflow 1.14.0
JetPack 4.2.2
TensorRT 5.1.6-1+cuda10.0
error message:
UFF Text Output written to /home/jetbot/Notebooks/test_object_detection/tensorrt_demos/ssd/frozen_inference_graph.pbtxt
[TensorRT] ERROR: UffParser: Validator error: FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128_depthwise/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
File "build_engine.py", line 216, in <module>
main()
File "build_engine.py", line 210, in main
buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
I also cannot find the file tmp_xxx.pbtxt in the exported_model folder after running export.sh.
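For the "Unsupported operation _FusedBatchNormV3" error: TensorFlow 1.14 writes batch-norm nodes as FusedBatchNormV3, which the UFF parser in TensorRT 5.1 does not recognize (TensorFlow 1.12 still wrote the older FusedBatchNorm). A commonly suggested workaround is to rewrite the op name before UFF conversion; here it is sketched as a plain function over node-like objects, and the wiring into graphsurgeon (running it over a DynamicGraph's node list) is an assumption you should verify against your graphsurgeon version:

```python
def downgrade_fused_batch_norm(nodes):
    """Rewrite FusedBatchNormV3 ops (TF >= 1.14) to FusedBatchNorm so that
    TensorRT 5.x's UFF parser accepts them. `nodes` is any iterable of
    objects with an `op` attribute (e.g. the nodes of a graphsurgeon
    DynamicGraph). Returns the number of nodes rewritten."""
    changed = 0
    for node in nodes:
        if node.op == 'FusedBatchNormV3':
            node.op = 'FusedBatchNorm'
            changed += 1
    return changed
```

Alternatively, exporting the frozen graph with TensorFlow 1.12.x avoids the V3 op entirely.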
Pay special attention to input_order in the previous step. You could verify it by checking the tmp_xxx.pbtxt debug file. Look at the ‘NMS’ node and verify the order of its 3 input tensors.
The problem you’ve encountered might be due to some later changes in the object detection API (i.e. https://github.com/tensorflow/models), but I cannot be sure. By the way, are you using TensorFlow 1.12.x for exporting the frozen graph and converting the pb to uff?
Otherwise, if you’d like to have the frozen_inference_graph.pb, I could send the file to you.
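For context on where input_order lives: each model in build_engine.py has a MODEL_SPECS entry. A hypothetical entry for a one-class ("red box") model might look like the following; the field names are recalled from that script and the values are purely illustrative, so verify both against your copy:

```python
# Hypothetical MODEL_SPECS entry (field names and values are illustrative
# assumptions -- check them against the actual build_engine.py you are using).
MODEL_SPECS = {
    'ssd_mobilenet_v2_redbox': {
        'input_pb': 'frozen_inference_graph.pb',
        'tmp_uff': 'tmp_redbox.uff',
        'output_bin': 'TRT_ssd_mobilenet_v2_redbox.bin',
        'num_classes': 2,           # 1 object class ('red box') + 1 background
        'min_size': 0.2,
        'max_size': 0.95,
        'input_order': [1, 0, 2],   # order of the NMS node's 3 inputs: verify
                                    # against the tmp_xxx.pbtxt debug file
    },
}
```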
Sorry, it must be a misunderstanding. I can convert the model and get the pb file; the problem is that I do not know how to use build_engine.py, because I cannot find the tmp_xxx.pbtxt file in the exported_model folder, which I need in order to determine the order of the NMS node's 3 input tensors.
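If the tmp_xxx.pbtxt debug file is missing, one fallback (an assumption about the workflow, not part of the original scripts) is simply to brute-force it: the NMS plugin takes exactly 3 inputs (loc data, conf data, priorbox data), so there are only 6 possible input_order values, and you can try each one until the UFF parser stops complaining:

```python
from itertools import permutations

# All candidate input_order values for the NMS plugin's 3 inputs
# (loc, conf, priorbox). With only 6 possibilities, trying each until the
# engine builds successfully is a cheap fallback when the debug pbtxt is
# unavailable.
candidate_orders = [list(p) for p in permutations(range(3))]
```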
I am in a similar situation: I am using a Jetson Nano with JetPack 4.3.
I have a frozen graph, but in my case I am trying to do transfer learning from MobileNet v2 and detect 8 different objects.
I’ve tried running jkjung13’s build_engine.py but that gives me an error.
I copied the ssd_mobilenet_v2_coco MODEL_SPECS and simply renamed it to mymodel.
I updated the num_classes to 8.
I seem to be able to produce a uff file; there are some errors during that step, but it produces the file.
Then it stops and claims there is no serialize attribute, but that is because the engine is NoneType. It is something about this line:
engine = builder.build_cuda_engine(network) returning a NoneType.
No. nodes: 1094
UFF Output written to /home/team5607/NanoVision5607/transferLearning/tmp_model.uff
UFF Text Output written to /home/team5607/NanoVision5607/transferLearning/tmp_model.pbtxt
[TensorRT] ERROR: UffParser: Parser error: BoxPredictor_0/Reshape: Reshape: Volume mismatch. Note: Initial Volume = 4332, New Volume = 3072
WTF
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
File "./build_engine.py", line 231, in <module>
main()
File "./build_engine.py", line 225, in main
buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
[TensorRT] ERROR: UffParser: Parser error: BoxPredictor_0/Reshape: Reshape: Volume mismatch. Note: Initial Volume = 4332, New Volume = 3072
[TensorRT] ERROR: Network must have at least one output
I tried that, same error.
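One hedged way to read the volume mismatch (an interpretation, not a confirmed diagnosis): the "Initial Volume" comes from the reshape constant baked into the frozen graph, while the "New Volume" is computed from the dims registered with the parser. Assuming BoxPredictor_0 emits 3 anchors × 4 box coordinates per grid cell, 4332 corresponds to a 19×19 grid (the first SSD feature map of a 300×300 input) and 3072 to a 16×16 grid, which would suggest the input dims in the model spec do not match the resolution the model was exported at:

```python
# Assumption: SSD's BoxPredictor_0 emits (anchors_per_cell * 4) box values
# per grid cell. Check which grid sizes reproduce the two volumes reported
# in the error message.
anchors_per_cell = 3
box_coords = 4
per_cell = anchors_per_cell * box_coords   # 12 values per grid cell

initial_volume = 19 * 19 * per_cell        # 19x19 grid -> 4332 ("Initial Volume")
new_volume = 16 * 16 * per_cell            # 16x16 grid -> 3072 ("New Volume")
```

If that reading is right, double-checking the input dims in your MODEL_SPECS against the exported model's input resolution would be the next step.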
<class 'tensorrt.tensorrt.INetworkDefinition'>
WTF
Traceback (most recent call last):
File "./build_engine.py", line 232, in <module>
main()
File "./build_engine.py", line 226, in main
buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
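For reference, the recurring NoneType traceback can be turned into a clearer failure with a small guard: build_cuda_engine() returns None when the network could not be built (for example after a UffParser error), so checking for None before serializing surfaces the real problem instead of an AttributeError. This is a sketch around the calls shown in the thread, not a fix for the underlying parse error:

```python
def serialize_engine(builder, network, out_path):
    """Build and serialize a TensorRT engine, failing loudly when the build
    fails: builder.build_cuda_engine(network) returns None if the network
    could not be built (e.g. the UFF parser reported errors and the network
    has no outputs)."""
    engine = builder.build_cuda_engine(network)
    if engine is None:
        raise RuntimeError(
            "build_cuda_engine returned None -- check the [TensorRT] ERROR "
            "lines printed above for the underlying parser/build failure")
    with open(out_path, "wb") as f:
        f.write(engine.serialize())
```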