I want to run inference on a Caffe model with INT8 precision. What should I do?

Hi,
I want to run inference on a Caffe model with INT8 precision, but the provided demo uses an ONNX model. I would like to know whether there is a demo for Caffe models that I can study.

I also don’t understand why it is necessary to set the dynamic range for a tensor and to set the output type of a layer.
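For context, this is roughly what those two calls look like in the TensorRT C++ API. When you build an INT8 engine without a calibrator, TensorRT needs the value range of every tensor to choose a quantization scale, and setting a layer's output type forces that layer to actually emit INT8. This is only a minimal sketch: the uniform [-10, 10] range and the function name setInt8Ranges are placeholders, and real ranges would come from calibration or measurements on representative data.

#include "NvInfer.h"

// Sketch: give every tensor an explicit dynamic range and request INT8 output.
// The builder config must also have BuilderFlag::kINT8 set for this to apply.
void setInt8Ranges(nvinfer1::INetworkDefinition& network)
{
    const float kMin = -10.0f;  // placeholder range for illustration only
    const float kMax = 10.0f;

    // Network inputs need a range as well.
    for (int i = 0; i < network.getNbInputs(); ++i)
        network.getInput(i)->setDynamicRange(kMin, kMax);

    for (int i = 0; i < network.getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network.getLayer(i);
        for (int j = 0; j < layer->getNbOutputs(); ++j)
        {
            // Tell TensorRT the expected value range so it can pick a scale.
            layer->getOutput(j)->setDynamicRange(kMin, kMax);
            // Optionally pin this output to INT8 so TensorRT does not fall
            // back to FP32/FP16 here (not every layer supports INT8).
            layer->setOutputType(j, nvinfer1::DataType::kINT8);
        }
    }
}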

Thanks for your time.

Is INT8 only supported for ONNX models?

Hi,

No.
These are independent features; any model format can be run with INT8 precision.
You can simply use our trtexec binary to test different model formats with different precision modes:

$ /usr/src/tensorrt/bin/trtexec --help
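For example, something along these lines should build an INT8 engine from a Caffe model on TensorRT releases that still ship the Caffe parser (the file names and the output blob name prob are placeholders for your own model):

$ /usr/src/tensorrt/bin/trtexec --deploy=model.prototxt --model=model.caffemodel --output=prob --int8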

Thanks.