Does an ONNX model support INT8 calibration?
Yes, TensorRT supports INT8 calibration of ONNX models. Please refer to the link below:
In TRT 7, the ONNX parser supports full-dimensions mode only. Your network definition must be created with the explicitBatch flag set (when using the ONNX parser).
Please refer to the link below for more details:
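As a rough sketch of what this looks like in the TensorRT Python API (TRT 7 era): the path, workspace size, and calibrator object below are placeholders, and the calibrator is assumed to be your own `IInt8EntropyCalibrator2` subclass.

```python
# Hedged sketch: building an INT8 engine from an ONNX model with the
# TensorRT Python API. Requires a CUDA-capable GPU and the tensorrt package.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_int8_engine(onnx_path, calibrator):
    builder = trt.Builder(TRT_LOGGER)
    # The ONNX parser requires an explicit-batch network definition.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB, adjust to your GPU
    config.set_flag(trt.BuilderFlag.INT8)
    config.int8_calibrator = calibrator  # your IInt8EntropyCalibrator2 subclass
    return builder.build_engine(network, config)
```

Without the explicitBatch flag, `parser.parse` will fail on TRT 7, since implicit-batch networks are not supported by the ONNX parser.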
Excuse me, does this affect the results?
When I convert the ONNX model to an INT8 model, it warns: "WARNING: Tensor xxx is uniformly zero: network calibration failed".
I don't think it should affect the results since it's just a warning.
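That said, this warning often means the calibration data fed to TensorRT is all zeros for that tensor (a common cause is a preprocessing bug, e.g. reading images from a wrong path). A quick pure-NumPy sanity check on your calibration batches is a minimal sketch; `find_uniformly_zero` and the batch shapes are hypothetical names for illustration:

```python
import numpy as np

def find_uniformly_zero(batches):
    """Return indices of calibration batches that are entirely zero."""
    return [i for i, b in enumerate(batches) if not np.any(b)]

# Example: batch 1 is all zeros and would trip the calibration warning.
batches = [np.random.rand(1, 3, 8, 8), np.zeros((1, 3, 8, 8))]
print(find_uniformly_zero(batches))  # → [1]
```

If this flags any of your batches, fix the data-loading/preprocessing step before re-running calibration.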
In case you are facing issues, please refer to the link below:
If the issue persists, please share the script & model file along with the info below so we can better help.
Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o TensorFlow and PyTorch version
o TensorRT version