Assertion `d.nbDims >= 3' failed with INT8 mode

Provide details on the platforms you are using:
Linux distro and version: Ubuntu 16.04
GPU type: GTX 1080 Ti
nvidia driver version: 396.54
CUDA version: 9
CUDNN version: 7.1
Python version [if using python]: 3.5
Tensorflow version: r1.11
TensorRT version: 4.0.1.6
If Jetson, OS, hw versions: n/a

Describe the problem

2018-09-25 17:20:14.600365: I tensorflow/contrib/tensorrt/kernels/trt_engine_op.cc:577] Starting calibration thread on device 0, Calibration Resource @ 0x7f7fb4001730
2018-09-25 17:20:14.600539: I tensorflow/contrib/tensorrt/kernels/trt_engine_op.cc:577] Starting calibration thread on device 0, Calibration Resource @ 0x7f7fac071a40
2018-09-25 17:20:14.973555: I tensorflow/contrib/tensorrt/kernels/trt_engine_op.cc:577] Starting calibration thread on device 0, Calibration Resource @ 0x7f7fa4004fe0
python: helpers.cpp:56: nvinfer1::DimsCHW nvinfer1::getCHW(const nvinfer1::Dims&): Assertion `d.nbDims >= 3' failed.
Aborted (core dumped)

How to reproduce

import numpy as np
import tensorflow as tf
from tensorflow.contrib import tensorrt as trt

def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Then we import the graph_def into a new default Graph using the
    # built-in tf.import_graph_def
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(
            graph_def,
            name='',  # DEBUG
        )
    return graph

fid = "model.pb"
output_nodenames = 'out1,out2,out3'
output_node = output_nodenames.split(",")
g = load_graph(fid)
with tf.Session(graph=g) as sess:
    trt_graph = trt.create_inference_graph(
        input_graph_def=tf.get_default_graph().as_graph_def(),
        outputs=output_node,
        max_batch_size=99999,
        max_workspace_size_bytes=1 << 25,
        precision_mode="INT8",  # TRT engine precision: "FP32", "FP16" or "INT8"
        minimum_segment_size=2,  # minimum number of nodes in an engine
    )
    with tf.gfile.GFile("trt.pb", "wb") as f:
        f.write(trt_graph.SerializeToString())

g2 = load_graph("trt.pb")
with tf.Session(graph=g2) as sess:
    """Run given calibration graph multiple times."""
    num_samples = 10
    np.random.seed(0)
    ip1_data = np.random.rand(num_samples, 700, 800, 6).astype(np.float32)
    ip1 = g2.get_tensor_by_name("ip1:0")

    ip2_data = np.random.rand(4).astype(np.float32)
    ip2 = g2.get_tensor_by_name("ip2:0")

    ip3_data = np.random.rand(20000, 6).astype(np.float32)
    ip3 = g2.get_tensor_by_name("ip3:0")

    ip4_data = np.random.rand(20000, 4).astype(np.float32)
    ip4 = g2.get_tensor_by_name("ip4:0")

    out1 = g2.get_tensor_by_name("out1:0")
    out2 = g2.get_tensor_by_name("out2:0")
    out3 = g2.get_tensor_by_name("out3:0")
    # Run over real calibration data here; we are mimicking a calibration
    # set of 10 batches. Use as much calibration data as you want.
    for i in range(num_samples):
        val = sess.run([out1, out2, out3],
                       feed_dict={ip1: ip1_data[i], ip2: ip2_data,
                                  ip3: ip3_data, ip4: ip4_data})
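
The crash reported above fires inside this calibration loop, so the script never reaches the conversion step. For completeness, in the TF 1.x contrib API the calibration run is normally followed by trt.calib_graph_to_infer_graph, which bakes the collected ranges into the final INT8 engines; a minimal sketch, reusing trt_graph from the script above:

# After the calibration sess.run() calls complete, convert the
# calibration graph into the final INT8 inference graph using the
# dynamic ranges collected during calibration.
int8_graph = trt.calib_graph_to_infer_graph(trt_graph)
with tf.gfile.GFile("trt_int8.pb", "wb") as f:
    f.write(int8_graph.SerializeToString())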

Related Issue: https://devtalk.nvidia.com/default/topic/1038306/tensorrt/nvinfer1-dimschw-nvinfer1-getchw-const-nvinfer1-dims-amp-assertion-d-nbdims-gt-3-failed-/?offset=2#5286167

Hello,

Can you share the model.pb and trt.pb you are using? It’d help us debug your issue.

@NVES Please see DM

Is there any solution for this problem yet? I got the same error with exactly the same setting.

No, still waiting for a fix. It's still an open bug.

Thank you for your response!
Did you try it with TensorRT 5.x.x.x?

I have tried all possible variations of TF and TRT versions; nothing works. This is a bug with no workaround that I could figure out, nor could the engineers at NVIDIA as of now.

Thank you very much!
Thanks to you I don’t have to try it.
Have a nice evening :)

Can someone explain why this error occurs? I’m having the same problem. Thanks!

Honestly, I can’t. I found the variable “nbDims” (without “d.”) in the “cuDNN Library.pdf”. It is described as:
“Output. Actual number of dimensions of the tensor will be returned in nbDims[0].”

Maybe something strange happens after calling trt.create_inference_graph(…, precision_mode=“INT8”) on your computation graph, so that tensor dimensions get lost… but I don’t think so.

The error occurred only when running inference for calibration, so the internal calibration function could be the cause of the error.
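
If the calibrator really is tripping over low-rank inputs (getCHW() asserts at least three dimensions, i.e. CHW), a quick sanity check is to list the rank of every placeholder in the frozen graph. This is a minimal sketch; report_low_rank_placeholders is a hypothetical helper, not part of the original repro:

import tensorflow as tf

def report_low_rank_placeholders(frozen_graph_filename):
    # getCHW() in TensorRT asserts d.nbDims >= 3, so any engine input
    # with fewer than three non-batch dimensions is a candidate culprit.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def.ParseFromString(f.read())
    for node in graph_def.node:
        if node.op == "Placeholder":
            dims = node.attr["shape"].shape.dim
            if len(dims) < 4:  # batch dim + CHW would be rank 4
                print("%s: rank %d, shape %s"
                      % (node.name, len(dims), [d.size for d in dims]))

In the repro above, ip2, ip3 and ip4 are fed rank-1 and rank-2 data, which would be consistent with the assertion.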

If I am wrong, please correct me.

Check this out,
https://github.com/tensorflow/tensorflow/issues/22514