Hi,
Does TensorRT support INT8 calibration for ONNX models?
Hi,
Yes, TRT supports INT8 calibration of ONNX models. Please refer to the links below:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-700/tensorrt-developer-guide/index.html#optimizing_int8_python
https://github.com/NVIDIA/TensorRT/tree/release/7.0/samples/opensource/sampleINT8API
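As a minimal sketch of what an INT8 entropy calibrator might look like with the TensorRT 7 Python API (PyCUDA is assumed for device buffers; the batch generator, sizes, and cache path are illustrative names, not part of the TRT API):

```python
def make_entropy_calibrator(batch_gen, batch_size, input_bytes,
                            cache_file="calibration.cache"):
    """Sketch: wrap a Python batch generator in an INT8 entropy calibrator.

    `batch_gen` is assumed to yield contiguous host buffers (e.g. NumPy
    arrays) of `input_bytes` bytes each; all names here are illustrative.
    """
    import tensorrt as trt          # assumed: TensorRT 7 bindings installed
    import pycuda.autoinit          # noqa: F401  (creates a CUDA context)
    import pycuda.driver as cuda

    class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
        def __init__(self):
            super().__init__()
            self.device_input = cuda.mem_alloc(input_bytes)
            self.batches = iter(batch_gen)

        def get_batch_size(self):
            return batch_size

        def get_batch(self, names):
            try:
                batch = next(self.batches)
            except StopIteration:
                return None  # no more calibration data
            cuda.memcpy_htod(self.device_input, batch)
            return [int(self.device_input)]

        def read_calibration_cache(self):
            try:
                with open(cache_file, "rb") as f:
                    return f.read()
            except FileNotFoundError:
                return None

        def write_calibration_cache(self, cache):
            with open(cache_file, "wb") as f:
                f.write(cache)

    return EntropyCalibrator()
```

The linked sampleINT8API shows the official (C++) version of the same idea.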
Note:
In TRT 7, the ONNX parser supports full-dimensions mode only, so your network definition must be created with the explicitBatch flag set when using the ONNX parser.
Please refer to the link below for more details:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-release-notes/tensorrt-7.html
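Putting the two points together, a rough build sketch with the explicit-batch flag and an INT8 calibrator might look like this (TensorRT 7 Python API; the calibrator object, ONNX path, and workspace size are assumptions for the example):

```python
def build_int8_engine(onnx_path, calibrator, max_workspace=1 << 30):
    """Sketch: build an INT8 TensorRT engine from an ONNX file.

    `calibrator` is assumed to be an implementation of
    trt.IInt8EntropyCalibrator2 that feeds your calibration batches.
    """
    import tensorrt as trt  # assumed: TensorRT 7 Python bindings installed

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)

    # TRT 7's ONNX parser requires full-dimensions (explicit batch) mode.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)

    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.max_workspace_size = max_workspace
    config.set_flag(trt.BuilderFlag.INT8)
    config.int8_calibrator = calibrator

    return builder.build_engine(network, config)
```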
Thanks
Hi,
Excuse me, does this affect the results?
When I convert the ONNX model to an INT8 engine, I get the warning "WARNING: Tensor xxx is uniformly zero: network calibration failed".
Hi,
I don't think it should affect the results, since it's just a warning.
In case you are facing issues, please refer to the link below:
https://devtalk.nvidia.com/default/topic/1057296/tensorrt/tensorrt-int8-calibration-failed-for-a-tensor-is-uniformly-zero-/post/5361177/#5361177
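One common cause of that warning is a calibration batch that is entirely zero for some input (e.g. a preprocessing bug zeroing out images), which gives the calibrator no dynamic-range information. A quick sanity check you could run on your calibration batches before calibrating (names are illustrative; batches here are plain nested lists, but the same idea applies to NumPy arrays):

```python
def uniformly_zero(tensor):
    """Return True if every element of a (possibly nested) tensor is zero."""
    if isinstance(tensor, (list, tuple)):
        return all(uniformly_zero(t) for t in tensor)
    return tensor == 0

def check_calibration_batches(batches):
    """Report the indices of batches that are uniformly zero.

    Such batches can trigger the warning
    'Tensor ... is uniformly zero: network calibration failed'.
    """
    return [i for i, batch in enumerate(batches) if uniformly_zero(batch)]

# Example: the second batch is all zeros and would be flagged.
batches = [[[0.1, 0.0], [0.3, 0.4]],
           [[0.0, 0.0], [0.0, 0.0]]]
print(check_calibration_batches(batches))  # → [1]
```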
If the issue persists, please share the script and model file along with the info below so we can help better.
Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version
Thanks