ONNX to INT8 TensorRT conversion issue

[TensorRT] WARNING: Int8 support requested on hardware without native Int8 support, performance will be negatively affected.
[TensorRT] ERROR: Calibration failure occurred with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.
[TensorRT] ERROR: Builder failed while configuring INT8 mode.
Created engine success!
Saving TRT engine file to path F_Let8.trt…
Traceback (most recent call last):
  File "trt_convertor.py", line 78, in <module>
    main()
  File "trt_convertor.py", line 75, in main
    ONNX2TRT(args, calib)
  File "trt_convertor.py", line 43, in ONNX2TRT
    f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'
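For context, the `AttributeError` follows directly from the build failure above: when TensorRT fails to build (here, because INT8 calibration failed), the builder returns `None` instead of an engine, and the "Created engine success!" message is printed without checking for that. A minimal sketch of a guard, assuming a save step like the one in the traceback (the function name and structure are mine, not the actual `trt_convertor.py`):

```python
# Hedged sketch: fail loudly if the TensorRT build did not produce an
# engine, instead of crashing later on engine.serialize(). The helper
# name save_engine is an assumption for illustration.

def save_engine(engine, path):
    """Serialize a TensorRT engine to disk; raise if the build failed."""
    if engine is None:
        # The builder returns None on failure (e.g. the INT8
        # calibration error above), so there is nothing to serialize.
        raise RuntimeError("TensorRT engine build failed; nothing to serialize")
    with open(path, "wb") as f:
        f.write(engine.serialize())
```

With this guard, the script reports the build failure at the point it happens rather than crashing with a confusing `NoneType` error while saving.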

Hi,

Please note that INT8 inference requires Tensor Core hardware.
However, Jetson Nano doesn't have Tensor Cores.

You can find the detailed support matrix below:
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#hardware-precision-matrix
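You can also check this programmatically before enabling INT8 mode. A minimal sketch (the helper name is mine, but `platform_has_fast_int8` is a real property of TensorRT's Python `Builder`):

```python
# Hedged sketch: query the builder for native INT8 support before
# requesting INT8 mode. The getattr fallback just makes the helper
# safe to call with any builder-like object.

def int8_supported(builder):
    """Return True only if the builder reports native INT8 support."""
    return bool(getattr(builder, "platform_has_fast_int8", False))
```

On Nano this reports no native INT8 support, matching the warning in your log; on Xavier-class devices it reports support.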

Thanks.

If Nano doesn't support INT8, why does Nano have an int8_caffe_mnist sample?

Hi,

The same package is used for all Jetson devices.
Tensor Cores can be found on our Xavier and Xavier NX platforms.

Thanks.

Thank you.