Do quantization zero-points still need to be 0?

Hi, instead of pytorch-quantization, is it possible to use PyTorch’s own quantization libraries and export to ONNX -> TensorRT? I’ve read somewhere that TensorRT does not support QDQ nodes with zero-points != 0 (i.e. it requires symmetric quantization). Is this still the case? PyTorch’s own quantization libraries can produce QDQ nodes with zero-points != 0.
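For context, here is a minimal, dependency-free sketch of the difference between symmetric and asymmetric quantization in ONNX QuantizeLinear/DequantizeLinear semantics. The scale/zero-point values below are illustrative, not taken from any real model; the point is that the constraint is on the zero-point (it must be 0 for symmetric quantization), while the scale is always a nonzero float:

```python
def quantize(x, scale, zero_point):
    """ONNX QuantizeLinear semantics (int8): q = clamp(round(x / scale) + zero_point)."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """ONNX DequantizeLinear semantics: x = (q - zero_point) * scale."""
    return (q - zero_point) * scale

# Symmetric quantization: zero_point == 0 (what TensorRT expects in QDQ nodes).
q_sym = quantize(0.5, scale=0.02, zero_point=0)
x_sym = dequantize(q_sym, scale=0.02, zero_point=0)

# Asymmetric quantization: zero_point != 0 (possible with some PyTorch
# quantization configs) -- the round trip still works numerically, but
# QDQ nodes with a nonzero zero-point are the ones TensorRT rejects.
q_asym = quantize(0.5, scale=0.02, zero_point=10)
x_asym = dequantize(q_asym, scale=0.02, zero_point=10)
```

Both round trips recover the same value; the difference is only where the quantized integer range is centered.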

Hi,
Please share the ONNX model and the script, if you haven’t already, so that we can assist you better.
Meanwhile, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

# Path to your ONNX model, passed as the first command-line argument.
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
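For example (the model path is a placeholder for your own file):

trtexec --onnx=model.onnx --int8 --verbose

The --verbose flag prints the layer-by-layer build log, which shows where engine building fails if a QDQ node is unsupported.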
Thanks!