How to choose an efficient computing model on Jetson Nano

Hi everyone,
We are selecting devices to deploy our algorithm, and Jetson Nano is one of the candidates. With the native Caffe framework installed on Jetson Nano, the algorithm runs at roughly 90 ms per frame; with TensorRT on Jetson Nano in FP16 mode (INT8 does not appear to be supported on Nano), it runs at roughly 70 ms per frame.
Is there a real-time object-detection model that can run on Jetson Nano with a single-frame time under 40 ms? Any framework or acceleration library you recommend is fine.
By the way, the same algorithm runs in 40 ms on Movidius and in 35 ms on RK3399 using the Tengine acceleration library.
We can provide an algorithm demo if necessary.
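For what it's worth, when comparing per-frame times across devices it helps to measure them the same way everywhere: discard a few warm-up iterations (so CUDA context creation and autotuning don't skew the numbers) and average over many frames. A minimal sketch is below; `infer` is a hypothetical stand-in for one forward pass of the detector, not part of any framework.

```python
import time

def measure_latency_ms(infer, n_warmup=10, n_runs=100):
    """Return the average single-frame latency of `infer` in milliseconds.

    `infer` is a placeholder callable for one forward pass of the network.
    Warm-up iterations are run first and excluded from the average so that
    one-time setup costs do not inflate the result.
    """
    for _ in range(n_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return (time.perf_counter() - start) * 1000.0 / n_runs

if __name__ == "__main__":
    # Dummy workload standing in for the real network:
    lat = measure_latency_ms(lambda: sum(range(10000)))
    print(f"average latency: {lat:.2f} ms")
```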

Best regards!

Hi,

Please first try maximizing the device performance to see if it helps:

sudo nvpmodel -m 0    # select the maximum-performance power mode (MAXN)
sudo jetson_clocks    # lock the CPU/GPU/EMC clocks at their maximum frequencies

We have benchmarked several models for Nano here:
https://devblogs.nvidia.com/jetson-nano-ai-computing/

You can check whether one of them fits your use case.
Thanks.