RuntimeError: CHECK failed: (index) < (current_size_):

platform:
Ubuntu 16.04
K2200
CUDA 10.0
cuDNN 7.6.5
Python 3.6
TensorFlow 1.12
TensorRT 6.0.1

problem:
I used the code from "GitHub - AastaNV/TRT_object_detection: Python sample for referencing object detection model with TensorRT", set `from config import model_ssd_mobilenet_v2_coco_2018_03_29 as model`, and used the official "ssd_mobilenet_v2_coco" model from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md. With that model it gives the correct result.
But after retraining that model on our own data and exporting a frozen model, running inference with the frozen model produces the following error:

python3 main.py …/image1.jpg

[TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace: 
WARNING: To create TensorRT plugin nodes, please use the `create_plugin_node` function instead.
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]
=========================================

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
DEBUG [/usr/local/lib/python3.6/site-packages/uff/converters/tensorflow/converter.py:104] Marking ['NMS'] as outputs
No. nodes: 675
UFF Output written to tmp.uff
[libprotobuf FATAL /externals/protobuf/x86_64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_): 
Traceback (most recent call last):
  File "main.py", line 40, in <module>
    parser.parse('tmp.uff', network)
RuntimeError: CHECK failed: (index) < (current_size_):

The model file is at: "frozen_inference_graph.pb - Google Drive

Can you test my .pb file?

Thanks.

Hi,

You may need to update the config file for your model, e.g. the input dimensions or the number of classes.

Please refer to below example:
https://github.com/AastaNV/TRT_object_detection/blob/master/config/model_ssd_mobilenet_v2_coco_2018_03_29.py
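For a retrained model, the values that usually need editing are the model path, the input dimensions, and the class count. Below is a minimal sketch of the top-level entries, assuming the same layout as the linked config file; every concrete path and number here is a placeholder to be replaced with your model's values:

```python
# Sketch of the top-level values in a TRT_object_detection config file.
# All concrete values below are placeholders -- substitute your own.
import graphsurgeon as gs  # ships with the TensorRT Python packages

path = 'model/my_retrained_model/frozen_inference_graph.pb'  # your frozen graph
TRTbin = 'TRT_my_retrained_model.bin'                        # serialized engine output
output_name = ['NMS']   # final output node of the SSD post-processing
dims = [3, 300, 300]    # CHW input size; must match the exported model
layout = 7              # values per detection in the NMS output
```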

Thanks

Hi,

I met this issue and this is how I solved it.

The problem likely comes from the "numClasses" parameter in the NMS `create_plugin_node` call.

As far as I know, numClasses = "your num_classes" + 1. The extra 1 is for the background class.

https://devtalk.nvidia.com/default/topic/1069027/tensorrt/parsing-gridanchor-op-_gridanchor_trt-protobuf-repeated_field-h-1408-check-failed-index-lt-current_size_-/?offset=3#5415537
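Concretely, in the config's plugin-mapping code the NMS node would look roughly like this. This is a sketch assuming the SSD config style from TRT_object_detection; the thresholds shown are that repository's defaults, and `numClasses` is the value to adjust for a retrained model:

```python
import graphsurgeon as gs  # from the TensorRT Python packages

NUM_CLASSES = 2  # example: a model retrained on 2 object classes

NMS = gs.create_plugin_node(
    name='NMS',
    op='NMS_TRT',
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=NUM_CLASSES + 1,  # your classes + 1 for background
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1,
)
```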

I am hitting the same issue with an ssd_mobilenet_v1 model. I changed the "numClasses" parameter in the NMS `create_plugin_node` call to "my num_classes" + 1, but the error is not resolved.