I’m trying to convert my TensorFlow model, which contains some custom layers, to a TensorRT model.
I am pretty new to TensorRT, so I wrote some fairly simple plugins to stand in for the unsupported layers.
These plugins still have problems (incorrect formats, shapes, batch handling, and so on), but for now I just want to confirm that the way I use the plugins is correct.
But when I try to parse the .uff file, I get a libprotobuf fatal error:
[libprotobuf FATAL /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/externals/protobuf/x86_64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
what(): CHECK failed: (index) < (current_size_):
Aborted (core dumped)
So can an incorrect custom plugin lead to an error like this?
If so, what is the key point, and how can I solve it?
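For reference, this is roughly how I load the UFF parser with plugin support before parsing. It is only a sketch along the lines of sampleUffSSD: the function name parseModel is mine, and it assumes the bundled TensorRT plugins (registered by initLibNvInferPlugins) cover the custom ops in the UFF file.

```cpp
// Sketch (not the sample's exact code): parse a UFF model after
// registering TensorRT's bundled plugins with the global registry.
#include "NvInferPlugin.h"
#include "NvUffParser.h"

bool parseModel(nvinfer1::ILogger& logger, nvinfer1::INetworkDefinition& network)
{
    // Register the bundled plugins (NMS_TRT, GridAnchor_TRT, ...) in the
    // empty namespace; without this the parser cannot resolve the custom
    // op nodes written into the UFF file.
    initLibNvInferPlugins(&logger, "");

    auto* parser = nvuffparser::createUffParser();
    // Input/output names and dimensions must match the converted graph.
    parser->registerInput("Input", nvinfer1::Dims3(3, 300, 300),
                          nvuffparser::UffInputOrder::kNCHW);
    parser->registerOutput("NMS");
    return parser->parse("sample_ssd_relu6.uff", network,
                         nvinfer1::DataType::kFLOAT);
}
```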
I hit this problem with sampleUffSSD.
I am using the ssd_inception_v2_coco_2018_01_28 model to train on my dataset in TensorFlow.
I have tested sampleUffSSD.cpp with the ssd_inception_v2_coco_2017_11_17 model, and it worked perfectly with no problems.
But I want to run inference through sampleUffSSD.cpp with ssd_inception_v2_coco_2018_01_28.
The conversion to UFF succeeds:
python3 convert_to_uff.py frozen_inference_graph.pb -O NMS -p config.py
Loading frozen_inference_graph.pb
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
UFF Version 0.6.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
key: "dtype"
value {
type: DT_FLOAT
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 1
}
dim {
size: 3
}
dim {
size: 300
}
dim {
size: 300
}
}
}
}
]
Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['NMS'] as outputs
No. nodes: 810
UFF Output written to sample_ssd_relu6.uff
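For reference, the config.py I pass to convert_to_uff follows the shape of the preprocessing script that ships with sampleUffSSD; a sketch is below. The plugin parameter values are the stock sample's defaults (tuned for the 2017 model), so for the 2018 checkpoint they are assumptions and may need adjusting.

```python
# config.py -- preprocessing script for convert_to_uff (sketch based on the
# stock sampleUffSSD config; the plugin parameters below are the sample's
# defaults and are assumptions for the 2018_01_28 checkpoint).
import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_plugin_node(name="Input", op="Placeholder",
                              dtype=tf.float32, shape=[1, 3, 300, 300])

PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    minSize=0.2, maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1], numLayers=6)

NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1, varianceEncodedInTarget=0, backgroundLabelId=0,
    confidenceThreshold=1e-8, nmsThreshold=0.6, topK=100, keepTopK=100,
    numClasses=91, inputOrder=[0, 2, 1], confSigmoid=1, isNormalized=1)

concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2",
                                 dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc",
    op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf",
    op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

# Map the unsupported TF namespaces onto the plugin nodes above.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Collapse each mapped namespace into its plugin node, then drop the
    # original graph outputs so NMS becomes the sole output.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    dynamic_graph.remove(dynamic_graph.graph_outputs,
                         remove_exclusive_dependencies=False)
```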
But then when I run sampleUffSSD, I get this error:
./sample_uff_ssd
&&&& RUNNING TensorRT.sample_uff_ssd # ./sample_uff_ssd
[05/30/2020-13:28:02] [I] Building and running a GPU inference engine for SSD
[libprotobuf FATAL /externals/protobuf/aarch64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
what(): CHECK failed: (index) < (current_size_):
Aborted (core dumped)