TensorRT with custom plugin. [libprotobuf FATAL]

Dear everyone,

Platform details:

Linux 18.04 LTS
GPU: GTX1079
NVIDIA driver: 390.77
CUDA: 9.0
cuDNN: 9.0-v7.4
TensorFlow: 1.12.0
TensorRT: 5.0


I'm trying to convert my TensorFlow model to a TensorRT model, and it contains some custom layers.

I am pretty new to TensorRT.

So I just made some very simple plugins to stand in for the unsupported layers.
These plugins still have some problems, such as incorrect formats, shapes, batches, and so on.

I just want to prove that the way I use the plugins is correct.

But when I try to parse the .uff file, I get a libprotobuf fatal error:

[libprotobuf FATAL /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/externals/protobuf/x86_64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
  what():  CHECK failed: (index) < (current_size_):
Aborted (core dumped)

So can an incorrect custom plugin lead to an error like this?
If so, what is the key point, and how can I solve it?

Any help would be appreciated.

Hello,

Incorrect custom plugins will result in undefined behavior. Please reference the samplePlugin example for details on how to define a custom layer:

https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#plugin_sample
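As a rough illustration of the call pattern involved (this uses simplified stand-in types, not the real NvInfer.h interfaces, and the FlattenConcat behavior is only sketched):

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for nvinfer1::Dims (illustration only, not the real API).
struct Dims { std::vector<int> d; };

// Sketch of two plugin methods the UFF parser calls while building the network.
// A real plugin would derive from nvinfer1::IPluginV2 and override these.
class FlattenConcatSketch {
public:
    // Number of output tensors this layer produces. The parser trusts this
    // value when wiring the node's outputs, so it must match the node's
    // actual output count in the graph.
    int getNbOutputs() const { return 1; }

    // Dimensions of output `index`, WITHOUT the batch dimension
    // (UFF parsing uses an implicit batch dimension).
    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputs) const {
        assert(index < getNbOutputs());
        int total = 0;
        for (int i = 0; i < nbInputs; ++i) {
            int n = 1;
            for (int v : inputs[i].d) n *= v;  // flatten each input...
            total += n;                        // ...and concatenate
        }
        return Dims{{total, 1, 1}};
    }
};
```

The key point is that both methods are answered from the plugin's own knowledge of the layer; the parser cross-checks them against the graph.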

Thanks for your reply,

I have modified my code, but there are still some problems.

I tested the sampleUffSSD.cpp code. The order in which the IPluginV2 and IPluginCreator methods are invoked while the parser parses the .uff file is as follows:

Begin parsing model...
FlattenConcatPluginCreator: getPluginName
FlattenConcatPluginCreator: getFieldNames
FlattenConcatPluginCreator: createPlugin
FlattenConcat: FlattenConcat0
FlattenConcat: setPluginNamespace
FlattenConcat: getNbOutputs
FlattenConcat: clone
FlattenConcat: FlattenConcat1
FlattenConcat: getNbOutputs
FlattenConcat: getOutputDimensions
FlattenConcatPluginCreator: getFieldNames
FlattenConcatPluginCreator: createPlugin
FlattenConcat: FlattenConcat0
FlattenConcat: setPluginNamespace
FlattenConcat: getNbOutputs
FlattenConcat: clone
FlattenConcat: FlattenConcat1
FlattenConcat: getNbOutputs
End parsing model...

And with my custom plugin it is:

Begin parsing model...
DeblurInputPluginCreator: getPluginName
Resize_TRT_PluginCreator: getPluginName
StopGradientPluginCreator: getPluginName
DeblurInputPluginCreator: getFieldNames
DeblurInputPluginCreator: createPlugin
DeblurInput: DeblurInput0
DeblurInput: setPluginNamespace
DeblurInput: getNbOutputs
DeblurInput: clone
DeblurInput: DeblurInput2
DeblurInput: getNbOutputs
[libprotobuf FATAL /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/externals/protobuf/x86_64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
  what():  CHECK failed: (index) < (current_size_):
Aborted (core dumped)

My getNbOutputs() just returns 4.

Why 4? I got it from the TensorFlow graph generated by TensorBoard.

So can you tell me more about how getNbOutputs() is used and which check is made?
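My guess at what fires (a rough, self-contained illustration, not the actual protobuf or UFF parser code): a protobuf repeated field aborts the process when indexed past its current size, and the parser seems to fetch one graph output per output reported by getNbOutputs():

```cpp
#include <cstdlib>
#include <vector>

// Mimics protobuf's bounds-checked repeated field: indexing past the end
// is what "CHECK failed: (index) < (current_size_)" reports before aborting.
template <typename T>
struct RepeatedFieldSketch {
    std::vector<T> items;
    int current_size() const { return static_cast<int>(items.size()); }
    const T& Get(int index) const {
        if (!(index < current_size())) std::abort();  // CHECK failed
        return items[index];
    }
};

// The parser asks the plugin how many outputs it has, then reads that many
// entries from the node's output list in the graph. If the plugin reports
// more outputs than the graph node actually has, Get() above would abort;
// here we return false instead so the sketch stays runnable.
inline bool wire_outputs(const RepeatedFieldSketch<int>& node_outputs,
                         int nb_outputs_reported) {
    if (nb_outputs_reported > node_outputs.current_size()) return false;
    for (int i = 0; i < nb_outputs_reported; ++i) node_outputs.Get(i);
    return true;
}
```

If that reading is right, returning 4 when the graph node only exposes fewer outputs would explain the CHECK failure.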

More details:

When I make getNbOutputs() return -1, this error is passed, but another error is raised.

Begin parsing model...
DeblurInputPluginCreator: getPluginName
Resize_TRT_PluginCreator: getPluginName
StopGradientPluginCreator: getPluginName
DeblurInputPluginCreator: getFieldNames
DeblurInputPluginCreator: createPlugin
DeblurInput: DeblurInput0
DeblurInput: setPluginNamespace
DeblurInput: getNbOutputs
DeblurInput: clone
DeblurInput: DeblurInput2
DeblurInput: getNbOutputs
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

Thanks a lot.

Hi, I have the same error,

[libprotobuf FATAL /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/externals/protobuf/x86_64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
what(): CHECK failed: (index) < (current_size_):
Aborted (core dumped)

Could you please tell me how to solve it? Thanks.

Hello,

I think it is caused by a mismatch between the tensor dimensions in the TensorFlow graph and those in the TensorRT graph.

What caused my fatal error was the dimension of the input tensor: it should be [3, 256, 256], but I had set it to [1, 3, 256, 256].

Please check the dimensions of all input and output tensors in the graph.
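To illustrate the convention (a self-contained sketch, not real TensorRT code): UFF parsing uses an implicit batch dimension, so a TensorFlow NCHW shape like [1, 3, 256, 256] should be given to the parser with the leading batch dimension stripped:

```cpp
#include <cassert>
#include <vector>

// Strip the leading (batch) dimension from a TensorFlow shape to get the
// dims the UFF parser expects, e.g. [1, 3, 256, 256] -> [3, 256, 256].
inline std::vector<int> toImplicitBatchDims(const std::vector<int>& tfDims) {
    assert(!tfDims.empty());
    return std::vector<int>(tfDims.begin() + 1, tfDims.end());
}
```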

Hi, I met the same error. Did you fix it?

I solved a similar problem; reference:

https://devtalk.nvidia.com/default/topic/1069027/tensorrt/parsing-gridanchor-op-gridanchor_trt-protobuf-repeated_field-h-1408-check-failed-index-lt-current_size-/?offset=3#5415537

I have got this problem with sampleUffSSD.
I am using the ssd_inception_v2_coco_2018_01_28 model to train on my dataset in TensorFlow.

I have tested sampleUffSSD.cpp with the ssd_inception_v2_coco_2017_11_17 model.

It worked perfectly. No problems.

But I want to use ssd_inception_v2_coco_2018_01_28 to run inference using sampleUffSSD.cpp.

It does get converted to UFF:
python3 convert_to_uff.py frozen_inference_graph.pb -O NMS -p config.py
Loading frozen_inference_graph.pb
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
UFF Version 0.6.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
key: "dtype"
value {
type: DT_FLOAT
}
}
attr {
key: "shape"
value {
shape {
dim {
size: 1
}
dim {
size: 3
}
dim {
size: 300
}
dim {
size: 300
}
}
}
}
]

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:96] Marking ['NMS'] as outputs
No. nodes: 810
UFF Output written to sample_ssd_relu6.uff

But then when I run sampleUffSSD.cpp, I get this error:
./sample_uff_ssd
&&&& RUNNING TensorRT.sample_uff_ssd # ./sample_uff_ssd
[05/30/2020-13:28:02] [I] Building and running a GPU inference engine for SSD
[libprotobuf FATAL /externals/protobuf/aarch64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
what(): CHECK failed: (index) < (current_size_):
Aborted (core dumped)