Hi,
I am currently trying to port my caffemodel to the Jetson Nano using TensorRT, but the performance was not satisfactory:
https://github.com/eric612/Jetson-nano-benchmark
I assumed my implementation was bad, so I tested another project:
https://github.com/ginn24/Pelee-TensorRT
The inference time was between 60 ms and 90 ms, but there is still a gap between the Nano and the TX2, which can reach 70 fps (inference only, no pre-processing). Is there any room for speedup apart from using the official framework?
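One thing I am considering is enabling FP16 mode when the engine is built, since the Nano's Maxwell GPU has fast FP16. A minimal sketch, assuming the TensorRT 5.x builder API that ships with JetPack (`buildFp16Engine` is a placeholder name, and `builder`/`network` come from the usual createInferBuilder / Caffe-parser flow, not from either repo):

```cpp
#include <NvInfer.h>

// Minimal sketch: build the engine in FP16 if the platform supports it.
// Assumes the TensorRT 5.x builder API on JetPack; names are placeholders.
nvinfer1::ICudaEngine* buildFp16Engine(nvinfer1::IBuilder* builder,
                                       nvinfer1::INetworkDefinition* network)
{
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 26);  // 64 MB of build-time scratch

    // The Nano's Maxwell GPU runs FP16 at twice the FP32 rate, so an
    // engine built in FP32 leaves roughly half the throughput unused.
    if (builder->platformHasFastFp16())
        builder->setFp16Mode(true);

    return builder->buildCudaEngine(*network);
}
```

Locking the clocks with `sudo nvpmodel -m 0` and `sudo jetson_clocks` before benchmarking also seems worth ruling out as a variable.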