Hi,

I failed to use the INT8 precision mode with TF-TRT on a Jetson AGX Xavier; could anyone give me some advice?

I wrote the inference code for my own TensorFlow model following this guide:

https://github.com/tensorflow/tensorrt/blob/r1.14%2B/tftrt/examples/image-classification/TF-TRT-inference-from-saved-model.ipynb

It works well with the FP32 and FP16 precision modes, but fails with the INT8 precision mode.

The error shows:

```
2019-12-04 16:57:59.996310: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:733] Number of TensorRT candidate segments: 2
2019-12-04 16:58:00.038247: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-12-04 16:58:00.310446: F ./tensorflow/compiler/tf2tensorrt/convert/convert_nodes.h:296] Check failed: is_weights()
Aborted (core dumped)
```

I found that the error is raised by the function **trt.create_inference_graph**:

```python
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 * (10**9),  # 1 << 25
    precision_mode="INT8",  # "FP32", "FP16", or "INT8"
    minimum_segment_size=7)
```
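For completeness, my understanding of the TF 1.14 INT8 workflow is that **trt.create_inference_graph** only produces a *calibration* graph: you then have to run representative data through it and finalize it with **trt.calib_graph_to_infer_graph**. Here is a minimal sketch of that flow (the `run_calibration` argument is a hypothetical helper you would write to feed calibration batches; I am not sure this avoids the crash above, since the check fails during segment conversion):

```python
def build_int8_inference_graph(frozen_graph, output_names, run_calibration):
    """Sketch of the TF-TRT 1.x INT8 flow: convert, calibrate, finalize.

    frozen_graph    -- a frozen tf.GraphDef
    output_names    -- list of output node names
    run_calibration -- hypothetical callable that executes the calibration
                       graph on representative input data
    """
    # Imported lazily so the sketch is readable without TF installed.
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    # In INT8 mode this returns a graph instrumented for calibration,
    # not the final inference graph.
    calib_graph = trt.create_inference_graph(
        input_graph_def=frozen_graph,
        outputs=output_names,
        max_batch_size=1,
        max_workspace_size_bytes=1 << 30,
        precision_mode="INT8",
        minimum_segment_size=7)

    # Feed representative batches so TensorRT can collect activation ranges.
    run_calibration(calib_graph)

    # Convert the calibrated graph into the final INT8 inference graph.
    return trt.calib_graph_to_infer_graph(calib_graph)
```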

So **does the TF-TRT package in JetPack 4.2.2 support the INT8 precision mode?** If so, please help me fix the problem above.

By the way, my model includes a BatchMatMul layer, which is not supported by pure TensorRT, and writing a plugin is difficult for me, so I probably won't use pure TensorRT.

SDK: JetPack 4.2.2

CUDA version: 10.0.326

Python version: 3.6.8

TensorFlow version: 1.14.0

TensorRT version: 5.1.6.1

Thanks.