TF-TRT INT8 conversion is not working.

2019-02-01 06:35:46.554212: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:193] Starting Calib Conversion
2019-02-01 06:35:46.733967: W tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:199] Construction of static int8 engine is not implemented yet!. Dynamic engine will be constructed
2019-02-01 06:35:47.901680: I tensorflow/compiler/tf2tensorrt/utils/trt_resources.cc:25] Destroying Calibration Resource 
 Calibrator = 0x7f0658001110
 Builder    = 0
 Engine     = 0x7f0800033a20
 Logger     = 0x7f0658008708
 Allocator  = 0x7f0658008760
 Thread     = 0x7f0658008740

2019-02-01 06:36:13.455979: I tensorflow/compiler/tf2tensorrt/utils/trt_resources.cc:25] Destroying Calibration Resource 
 Calibrator = 0x7f0648020ac0
 Builder    = 0
 Engine     = 0x7f080584ed00
 Logger     = 0x7f0648053a08
 Allocator  = 0x7f064801e7b0
 Thread     = 0x7f0648007740

2019-02-01 06:36:14.412340: I tensorflow/compiler/tf2tensorrt/utils/trt_resources.cc:25] Destroying Calibration Resource 
 Calibrator = 0x7f0698008e20
 Builder    = 0
 Engine     = 0x7f055c0c7e30
 Logger     = 0x7f0698008738
 Allocator  = 0x7f0698007bb0
 Thread     = 0x7f0698008ed0

pure virtual method called
terminate called without an active exception
Aborted (core dumped)

python3 tensorrt.py --frozen_graph=frozen_inference_graph.pb -bs 1 -if 1-1.jpeg --int8

https://github.com/tensorflow/models/blob/master/research/tensorrt/tensorrt.py

I’m trying to convert a TensorFlow Object Detection graph (Faster R-CNN) to TensorRT INT8.
(http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_2018_01_28.tar.gz)
The other precision modes convert fine, but INT8 does not work.
The error appears when I call the “calib_graph_to_infer_graph” method from “tensorflow.contrib.tensorrt” (imported as trt).
How can I fix this?
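For context, the INT8 flow I’m following (modeled on the tensorrt.py script linked above) looks roughly like this. The tensor names (“detection_boxes”, “image_tensor”) and the calib_images batch are placeholders for illustration, not necessarily the real names in the Faster R-CNN graph:

```python
# Sketch of the TF-TRT INT8 calibration flow (TF 1.13 contrib API).
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen graph from disk.
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    frozen_graph_def = tf.GraphDef()
    frozen_graph_def.ParseFromString(f.read())

# 1. Build a calibration graph from the frozen graph.
calib_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,
    outputs=["detection_boxes"],   # placeholder output tensor name
    max_batch_size=1,
    precision_mode="INT8")

# 2. Run inference on calibration images so TensorRT can collect
#    activation ranges. calib_images is a placeholder input batch.
with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(calib_graph, name="")
    sess.run("detection_boxes:0",
             feed_dict={"image_tensor:0": calib_images})

# 3. Convert the calibrated graph into an INT8 inference graph.
#    This is the call that crashes for me.
int8_graph = trt.calib_graph_to_infer_graph(calib_graph)
```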

I’m using tf-nightly-gpu (1.13.0.dev20190130).
CUDA 10 (Cuda compilation tools, release 10.0, V10.0.130)
Ubuntu 16.04
TensorRT 5.0.2
Python 3.5

Hello,

What type of GPU are you using? Note that INT8 requires hardware support: the following table lists NVIDIA hardware and the precision modes each supports.

https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#hardware-precision-matrix