How far can inference performance be improved through INT8 calibration on Jetson Nano?

Hello,

How far can inference performance be improved through INT8 calibration on the Nano?

Thank you.

Hi,

Sorry, INT8 inference is not supported on Jetson Nano.
It requires a GPU with compute capability 6.1 or higher, while the Nano's Maxwell GPU is compute capability 5.3.

Here are the details for your reference:
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#hardware-precision-matrix
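To see why the Nano falls short, here is a small illustrative sketch comparing the compute capabilities of common Jetson modules (values from NVIDIA's published module specs) against the INT8 minimum from the hardware precision matrix. The module names and the 6.1 threshold are taken from public documentation; the helper function itself is just an example, not a TensorRT API.

```python
# Compute capability of common Jetson modules (per NVIDIA specs).
JETSON_COMPUTE_CAPABILITY = {
    "Jetson Nano": (5, 3),       # Maxwell
    "Jetson TX2": (6, 2),        # Pascal
    "Jetson Xavier NX": (7, 2),  # Volta
}

# Minimum compute capability for INT8, per the TensorRT
# hardware precision matrix linked above.
INT8_MIN_CC = (6, 1)

def supports_int8(module: str) -> bool:
    """Return True if the module's GPU meets the INT8 minimum."""
    return JETSON_COMPUTE_CAPABILITY[module] >= INT8_MIN_CC

for name in JETSON_COMPUTE_CAPABILITY:
    status = "supported" if supports_int8(name) else "not supported"
    print(f"{name}: INT8 {status}")
```

On a device itself, you can also query support at runtime through the TensorRT Python API with `builder.platform_has_fast_int8`, which returns False on the Nano.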

Thanks.