How can we verify that the ONNX model was converted to an INT8 TensorRT engine rather than Float32?
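One way to check is to dump per-layer information from the built engine, e.g. with `trtexec --loadEngine=model.engine --exportLayerInfo=layers.json --profilingVerbosity=detailed`, and then tally the precision of each layer. Below is a minimal sketch of such a tally; note that the exact JSON schema (the `"Layers"` array and the `"Precision"` field assumed here) varies across TensorRT versions, so check your own dump and adjust the field names accordingly.

```python
import json

def count_precisions(layer_info_json: str) -> dict:
    """Tally layer precisions from a trtexec --exportLayerInfo dump.

    Assumes a top-level "Layers" array whose entries carry a
    "Precision" field -- adjust for your TensorRT version's schema.
    """
    info = json.loads(layer_info_json)
    counts = {}
    for layer in info.get("Layers", []):
        prec = layer.get("Precision", "Unknown") if isinstance(layer, dict) else "Unknown"
        counts[prec] = counts.get(prec, 0) + 1
    return counts

# Hypothetical sample dump, for illustration only.
sample = json.dumps({
    "Layers": [
        {"Name": "conv1", "Precision": "Int8"},
        {"Name": "conv2", "Precision": "Int8"},
        {"Name": "softmax", "Precision": "Float"},
    ]
})
print(count_precisions(sample))  # -> {'Int8': 2, 'Float': 1}
```

If most compute-heavy layers report Int8, the engine is genuinely quantized; if everything shows Float, the INT8 path was not taken.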

@spolisetty Thank you very much for helping me so many times. I am now trying to do TensorRT QAT (quantization-aware training). Could you tell me where I can find some examples?

Hi @530869411,

I hope the following will help you:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#work-with-qat-networks

Thank you.

@spolisetty Does TensorRT support QAT, or does it only support post-training quantization?

Hi @530869411,

It looks like the same query was posted on a new thread. Please follow up there.

Thank you.