An error occurred when running the calibration data:
tensorflow/contrib/tensorrt/kernels/trt_engine_op.cc:223] Check failed: t.TotalBytes() == device_tensor->TotalBytes() (2220 vs. 2192)
Aborted (core dumped)
What causes this problem? Running inference with the FP32 graph works fine.
If you open a TensorRT issue, here is our policy.
TensorRT developers respond to issues. We want to focus on fixing bugs and adding features.
Provide details on the platforms you are using:
Linux distro and version
nvidia driver version
Python version [if using python]
If Jetson, OS, hw versions
Describe the problem
Include any logs, source, or models (uff, pb) that would be helpful to diagnose the problem.
If relevant, please include the full traceback.
Try to provide a minimal reproducible test case.
I ran into the same error, although while doing something different.
Linux distro and version: Ubuntu 16.04
GPU type: GTX 1070 with Max-Q
nvidia driver version: 418.56
CUDA version: 10.1
CUDNN version: 22.214.171.124
Python version: 3.5.2
Tensorflow version: 1.13.1
TensorRT version: 126.96.36.199
This occurs when, within the same code, I convert more than one graph using trt.create_inference_graph(), import each with tf.import_graph_def(trt_graph), and then run an input through the first graph followed by the second.
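The conversion flow above can be sketched roughly as follows. This is a minimal illustration, not the exact reporter's code: the function name and graph placeholders are hypothetical, and it assumes TF 1.13's `tensorflow.contrib.tensorrt` API (`trt.create_inference_graph`). The imports are guarded so the sketch can be read even where TF-TRT is not installed.

```python
# Sketch of converting and importing one graph with TF-TRT (TF 1.13 API).
# Hypothetical helper; argument names follow trt.create_inference_graph.
try:
    import tensorflow as tf
    import tensorflow.contrib.tensorrt as trt
    HAVE_TRT = True
except ImportError:
    # tensorflow.contrib.tensorrt not available in this environment
    HAVE_TRT = False


def convert_and_import(frozen_graph_def, output_names, target_graph):
    """Convert one frozen GraphDef with TF-TRT and import the result.

    Calling this for two different graphs in the same process, then
    feeding an input through the first and then the second, is the
    pattern that triggers the TotalBytes() check failure above.
    """
    trt_graph = trt.create_inference_graph(
        input_graph_def=frozen_graph_def,
        outputs=output_names,
        max_batch_size=1,
        precision_mode="FP32")  # FP32 works; calibration/INT8 crashes
    with target_graph.as_default():
        tf.import_graph_def(trt_graph, name="")
    return target_graph
```

Repeating `convert_and_import` for a second graph in the same session is what distinguishes this reproduction from the single-graph case, which runs without error.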