TensorRT int8 calibration error: IndexError: _Map_base::at

Hi, I’m trying to run a BiSeNetV2 model with the Python TensorRT 7 API. The model was converted from a TensorFlow .pb file to ONNX with tf2onnx. It works fine in FP32 and FP16 modes.
However, when I build an INT8 engine, TensorRT raises “IndexError: _Map_base::at” at the “builder.build_engine(network, config)” call.

Here is my environment:
- Linux distribution: Ubuntu 16.04
- GPU: 2080Ti
- Nvidia driver version: 440.33.01
- CUDA version: 10.2
- CuDNN version: 8.0.4
- Python version: Python 3.5.2
- TensorFlow version: 1.14.0
- onnx version: 1.6.0
- tf2onnx version: 1.5.4
- TensorRT version: 7.1.3.4

The attachment contains the code and the model I’ve used.
TensorRT7_bisenetv2_int8_issued.tar.gz (20.1 MB)

I inspected the dimensions of the input image and the model, but still cannot figure out the problem. Can someone help? Thank you.

Hi @RahnRYHuang,

Could you please share the complete error logs and confirm whether you are able to run the model with ONNX Runtime?

Thank you.

I’m hitting the same problem: the build succeeds when the parameters are FP32, but fails with this error message when they are INT8.

[TensorRT] WARNING: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
Traceback (most recent call last):
  File "main.py", line 25, in <module>
    ONNX2TRT(args, calib=calib)
  File "trt_convertor.py", line 39, in ONNX2TRT
    engine = builder.build_cuda_engine(network)
IndexError: _Map_base::at

Hello, did you solve that problem?

Check whether you loaded the calibration set properly. In my case I was feeding the model an empty calibration set (length = 0), which produced this error.
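Since an empty calibration set surfaces only as the opaque “IndexError: _Map_base::at” much later, it is worth failing fast before handing the data to the calibrator. Here is a minimal sketch of such a sanity check; `load_calibration_paths`, the directory layout, and the `*.png` pattern are assumptions standing in for however your own script collects calibration images:

```python
import glob
import os

def load_calibration_paths(calib_dir, pattern="*.png"):
    """Collect calibration image paths, failing fast if the set is empty.

    An empty calibration set (length = 0) is one known cause of the
    opaque 'IndexError: _Map_base::at' during INT8 engine building,
    so checking here gives a readable error instead.
    """
    paths = sorted(glob.glob(os.path.join(calib_dir, pattern)))
    if len(paths) == 0:
        raise RuntimeError(
            "Calibration set is empty: no files matching %r in %r. "
            "INT8 calibration needs real input data." % (pattern, calib_dir)
        )
    return paths
```

Calling this before constructing the calibrator turns a cryptic build-time failure into an immediate, descriptive one.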


Hi @RahnRYHuang,
I have the same problem. Did you solve it?