Misc Error in createRegionScalesFromTensorScales when converting a model to INT8

import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt  # TF 1.14+

def input_fn(batch_size):
    """mode is not used when use_synthetic is True"""
    input_width, input_height = 512, 512
    # Synthetic calibration data: clipped Gaussian noise, normalized to [0, 1]
    features = np.random.normal(
        loc=0.5, scale=70,
        size=(batch_size, input_height, input_width, 1)).astype(np.float32)
    features = np.clip(features, 0.0, 255.0)
    features = features / 255.0
    return features
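
As a quick sanity check of the calibration data (this snippet is not part of the original script; the batch size is just an example), it may be worth printing the value range input_fn produces. Note that with loc=0.5 and scale=70, roughly half of the sampled values are negative and clip to exactly 0.0 before normalization:

# Hypothetical sanity check, not in the original script
batch = input_fn(8)                 # batch size 8 is only an example
print(batch.shape)                  # (8, 512, 512, 1)
print(batch.min(), batch.max())     # min is 0.0 due to clipping
print((batch == 0.0).mean())        # fraction of clipped pixels, roughly 0.5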

with tf.Session() as sess:
    with tf.gfile.GFile(tffilepath, 'rb') as f:
        frozen_graph = tf.GraphDef()
        frozen_graph.ParseFromString(f.read())
    converter = trt.TrtGraphConverter(
        input_graph_def=frozen_graph,
        nodes_blacklist=[output],  # output nodes
        max_batch_size=batch_size,
        max_workspace_size_bytes=1 << 25,
        precision_mode=precision.upper(),
        minimum_segment_size=minimum_segment_size,
        is_dynamic_op=use_dynamic_op)
    trt_graph = converter.convert()

    trt_engine_ops = len([1 for n in trt_graph.node if str(n.op) == 'TRTEngineOp'])
    print("numb. of trt_engine_ops in trt_graph:", trt_engine_ops)
    all_ops = len([1 for n in trt_graph.node])
    print("numb. of all_ops in trt_graph:", all_ops)

    frozen_graph_ops = len([1 for n in frozen_graph.node if str(n.op) == 'TRTEngineOp'])
    print("numb. of trt_engine_ops in frozen_graph:", frozen_graph_ops)

    all_ops = len([1 for n in frozen_graph.node])
    print("numb. of all_ops in frozen_graph:", all_ops)

    if precision == 'INT8':
        def input_data():
            features_ = input_fn(batch_size)
            return {'Placeholder:0': features_}

        # INT8 calibration step
        print('Calibrating INT8...')
        trt_graph = converter.calibrate(
            fetch_names=["UNet/conv2d_23/BiasAdd"],
            num_runs=10,
            feed_dict_fn=input_data)

        print('INT8 graph created.')
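
        # (Not in the original post: an assumed follow-up step that serializes
        # the calibrated graph. tf.train.write_graph is standard TF 1.x API;
        # the output directory and file name below are placeholders.)
        tf.train.write_graph(trt_graph, './models', 'unet_trt_int8.pb',
                             as_text=False)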

When I use this code to convert the model to INT8, the calibration step fails with the error in the title, which puzzles me. Is it caused by the synthetic data I generate, or by something else?

Hi,

Can you provide the following information so we can better help?
Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version

Also, if possible please share the script & model file to reproduce the issue.

Meanwhile, could you please try the "trtexec" command to test the model? "trtexec" is useful for benchmarking networks and makes it faster and easier to debug the issue.
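
For example, assuming the frozen graph has first been converted to UFF (the file name below is a placeholder; the input/output tensor names and the 1x512x512 input shape are taken from your script), an INT8 run could look like:

trtexec --uff=unet.uff \
        --uffInput=Placeholder,1,512,512 \
        --output=UNet/conv2d_23/BiasAdd \
        --int8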

Thanks