Runtime performance decreased when using INT8 - TFLite

Hi,
We are trying to run performance tests for a ResNet50 TFLite model on various hardware. When running on the Jetson Nano, as expected, we gain runtime performance: from 1.16 s (FP32) down to 0.64 s (INT8).

However, when running the same experiment on the Xavier, performance drops from 0.72 s (FP32) to 1.08 s (INT8).
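For reference, this is roughly how we measure latency (the model path is a placeholder for our converted file):

import time
import numpy as np
import tensorflow as tf

# Load the converted model (placeholder file name)
interpreter = tf.lite.Interpreter(model_path="resnet50_int8.tflite")
interpreter.allocate_tensors()

# Feed a dummy input matching the model's expected shape and dtype
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()  # warm-up run, excluded from timing

runs = 10
start = time.perf_counter()
for _ in range(runs):
    interpreter.invoke()
print("avg latency: %.3f s" % ((time.perf_counter() - start) / runs))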

Is there any guidance/blog to follow for running TFLite models on Jetson devices? Am I doing something wrong?

Thanks in advance,
Sapna

Hi,

Nano doesn’t support INT8 inference due to hardware limitations.
Could you double-check whether the inference falls back to another precision instead?
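For example, you can inspect the tensor types to confirm the graph is actually quantized; a quick check with the standard tf.lite API (model path is a placeholder):

import tensorflow as tf
from collections import Counter

interpreter = tf.lite.Interpreter(model_path="resnet50_int8.tflite")  # placeholder path
interpreter.allocate_tensors()

# A fully quantized graph should be dominated by int8 tensors;
# many float32 tensors suggest parts of the model stayed in FP32
print(Counter(t["dtype"].__name__ for t in interpreter.get_tensor_details()))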

First, please make sure you have maximized the device performance with the commands below:

$ sudo nvpmodel -m 0      # select the maximum-performance power mode
$ sudo jetson_clocks      # lock CPU/GPU/EMC clocks at their maximum

For a TFLite model, you can deploy it with either TensorFlow or TensorRT.
TensorRT can give you optimized performance but needs the ONNX format as an intermediate step; a rough outline of that path is sketched below.
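For example (file names are placeholders; your model may need a different opset or extra tf2onnx flags):

# Convert TFLite -> ONNX with tf2onnx
$ pip install tf2onnx
$ python -m tf2onnx.convert --tflite resnet50.tflite --output resnet50.onnx --opset 13

# Build and benchmark a TensorRT engine with trtexec (ships with JetPack).
# --int8 here measures speed with dummy scales; accurate INT8 deployment
# needs a calibration cache or a quantized ONNX model.
$ /usr/src/tensorrt/bin/trtexec --onnx=resnet50.onnx --int8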

You can find the Jetson benchmark table for ResNet50 below.
We can get 824 fps (inference only) on Xavier NX:

Thanks.
