TensorRT Yolo Int8 on TITAN RTX

Hi,
I have a problem converting a yolov3.pb model to INT8 with TensorRT. I downloaded yolov3.weights from https://pjreddie.com/media/files/yolov3.weights, converted it to .pb format, and then tried to convert the .pb model to INT8 with TensorRT 5.1.5.
I'm using CUDA 10.0 and TensorFlow 1.15.
The OS is Ubuntu 16.04.

My code is as follows:
from tensorflow.python.compiler.tensorrt import trt_convert as trt
if trt_precision_mode == 'INT8':

    calib_images_dir='/workspace/data/coco-2017/images/val2017'

    num_calib_images = 16
    calib_batch_size = 8
    calib_image_shape = (256,256)
    image_paths = glob.glob(os.path.join(calib_images_dir, '*.jpg'))
    image_paths = image_paths[:num_calib_images]
    num_batches = len(image_paths) // calib_batch_size

    def feed_dict_fn():
        # read batch of images
        batch_images = []
        for image_path in image_paths[feed_dict_fn.index:feed_dict_fn.index + calib_batch_size]:
            image = _read_image(image_path, calib_image_shape)
            batch_images.append(image)
        feed_dict_fn.index += calib_batch_size
        return {'inputs:0': np.array(batch_images)}

    feed_dict_fn.index = 0

    converter = trt.TrtGraphConverter(
        input_graph_def=graph_def,
        precision_mode=trt_precision_mode,
        nodes_blacklist=out_names,
        max_workspace_size_bytes=1<<30,
        minimum_segment_size=3,
        maximum_cached_engines=6,
        is_dynamic_op=True,
        use_calibration=True
        )
    trt_graph_def = converter.convert()

    trt_graph_def = converter.calibrate(
        fetch_names=out_names,
        num_runs=num_batches,
        feed_dict_fn=feed_dict_fn)
else:
    converter = trt.TrtGraphConverter(
        input_graph_def=graph_def,
        precision_mode=trt_precision_mode,
        nodes_blacklist=out_names)
    trt_graph_def = converter.convert()
return trt_graph_def

When I run the code, it produces the following errors:
TensorRT precision mode: INT8
Begin conversion.
terminate called after throwing an instance of 'std::out_of_range'
what(): _Map_base::at
convert_to_trt.sh: line 23: 36879 Aborted (core dumped)

By the way, converting the model to FP16 works fine.

I have no idea how to solve this problem; please give me some hints.
Thanks for any help!