Inference time not stable for Jetson Nano with TensorRT

Hi,

Have you maximized the clocks first?

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

Please note that the default clock mode is dynamic: the frequency is adjusted for power saving.
So inference may take much longer when the clock rate drops, and the runtime will vary as the frequency changes.
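To confirm whether dynamic clocking is the cause, it helps to measure latency over many runs and look at the spread, not just a single timing. Below is a minimal, hypothetical benchmarking sketch (the `run_inference` callable is a placeholder for your actual TensorRT execution call; the dummy workload only stands in for it):

```python
import statistics
import time

def benchmark(run_inference, warmup=10, iters=100):
    """Time a callable repeatedly and report latency mean/stdev in ms."""
    for _ in range(warmup):
        run_inference()  # warm-up runs let clocks and caches settle
    latencies = []
    for _ in range(iters):
        start = time.perf_counter()
        run_inference()
        latencies.append((time.perf_counter() - start) * 1000.0)  # ms
    return statistics.mean(latencies), statistics.stdev(latencies)

# Dummy CPU workload standing in for a real TensorRT inference call
mean_ms, std_ms = benchmark(lambda: sum(i * i for i in range(10000)))
print(f"mean {mean_ms:.2f} ms, stdev {std_ms:.2f} ms")
```

A large standard deviation that shrinks after running `jetson_clocks` is a good indication that frequency scaling was responsible for the instability.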

Thanks.