Can Jetson Nano run INT8 inference?

I used the following code to convert an ONNX model to INT8 with TensorRT:

onnx_to_tensorrt_int8.py (5.4 KB)
But when I use
test.py (923 Bytes)
to test it, the output is still float32.
I don't know whether the conversion was successful.
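
One way to inspect the result, sketched below assuming the TensorRT 7.x/8.x Python API that ships with JetPack (the engine filename is a placeholder): deserialize the engine and print its binding data types. Note that an INT8 engine normally still exposes FP32 input/output bindings, with quantization applied internally, so a float32 dtype in test.py does not by itself prove the conversion failed.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# "model_int8.engine" is a placeholder; use whatever file your script wrote.
with open("model_int8.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# I/O bindings usually stay FP32 even for an INT8 engine, so this shows
# the interface precision, not the per-layer precision.
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_dtype(i))
```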

I think the Jetson Nano only works with FP32 and FP16, not with INT8.

Given that, try replacing INT8 with FP16 in your onnx_to_tensorrt_int8.py file.
Just keep the build_int8_engine function name as it is, because I'm not sure a build_fp16_engine exists.
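
Regardless of what the function is called, the relevant change is in the builder config: request the FP16 flag instead of INT8 and drop the calibrator. A minimal sketch, assuming the TensorRT 7.x Python API shipped with JetPack 4.x; the function name and paths are illustrative, not an existing API:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Hypothetical name mirroring build_int8_engine; the paths are placeholders.
def build_fp16_engine(onnx_path, engine_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB; keep this modest on Nano
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # FP16 instead of trt.BuilderFlag.INT8

    engine = builder.build_engine(network, config)
    if engine is not None:
        with open(engine_path, "wb") as f:
            f.write(engine.serialize())
    return engine
```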

Good luck!

When I try to convert another model I get this warning:
Missing dynamic range for tensor squeeze_after_(Unnamed Layer* 123) [Activation]_out_tensor, expect fall back to non-int8 implementation for any layer consuming or producing given tensor

Hi,

INT8 inference needs Tensor Core hardware, which is only available on Jetson modules with GPU architecture 7.x or newer.
For Nano, please use FP32 or FP16 instead.

You can find the detailed support matrix below:
(Nano is GPU architecture = 5.3)
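
As a quick sanity check of what the hardware reports, here is a small sketch (assuming the TensorRT Python API available on JetPack) that asks the builder whether fast FP16/INT8 kernels exist on the current GPU:

```python
import tensorrt as trt

builder = trt.Builder(trt.Logger(trt.Logger.WARNING))
print("fast FP16:", builder.platform_has_fast_fp16)  # expected True on Nano (sm_53)
print("fast INT8:", builder.platform_has_fast_int8)  # expected False on Nano
```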

Thanks.

When I try to convert another model I get this warning:
Missing dynamic range for tensor squeeze_after_(Unnamed Layer* 123) [Activation]_out_tensor, expect fall back to non-int8 implementation for any layer consuming or producing given tensor

Hi,

The warning indicates that the calibration file doesn't contain data for all layers.
Layers that cannot find a corresponding calibration value fall back to non-INT8 kernels and raise this warning.

Did you use the same model to generate the calibration cache?
If not, please do so, since this directly affects inference accuracy.
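
For reference, regenerating the cache with the exact model you deploy means running the INT8 build with a calibrator that writes the cache file. Below is a minimal sketch of the common pattern, assuming TensorRT's IInt8EntropyCalibrator2 and pycuda; the cache filename and the batch source are placeholders, and this only applies on INT8-capable Jetson hardware, not the Nano:

```python
import os

import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds preprocessed batches to TensorRT and reads/writes the cache file."""

    def __init__(self, batches, cache_file="calibration.cache"):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.batches = iter(batches)   # iterable of float32 NCHW numpy arrays
        self.cache_file = cache_file
        self.device_input = None

    def get_batch_size(self):
        return 1  # must match the batch dimension of the arrays you feed in

    def get_batch(self, names):
        try:
            batch = np.ascontiguousarray(next(self.batches), dtype=np.float32)
        except StopIteration:
            return None  # no more data: calibration is finished
        if self.device_input is None:
            self.device_input = cuda.mem_alloc(batch.nbytes)
        cuda.memcpy_htod(self.device_input, batch)
        return [int(self.device_input)]

    def read_calibration_cache(self):
        # A stale cache from another model causes the "missing dynamic range"
        # warning: delete the file (or return None) whenever the model changes.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

The calibrator is attached to the build with config.set_flag(trt.BuilderFlag.INT8) and config.int8_calibrator = EntropyCalibrator(...).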

Thanks.