Question about inference speed

Hi,
I'm currently porting my Caffe model to the Jetson Nano using TensorRT, but the performance is not satisfactory:
https://github.com/eric612/Jetson-nano-benchmark

I assumed my implementation was the problem, so I tested another project:
https://github.com/ginn24/Pelee-TensorRT

The inference time was between 60 ms and 90 ms, but there is still a gap between the Nano and the TX2, which can reach 70 fps (inference only, no pre-processing). Is there any room for speedup other than using the official framework?

Hi,

It's recommended to test your model with TensorRT's trtexec first.
This gives you a basic profiling result for each layer.

cp -r /usr/src/tensorrt/ .
cd tensorrt/bin/
./trtexec --deploy=/path/to/prototxt --output=/name/of/output
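If the per-layer profile looks reasonable but the overall time is still too high, one common option on the Nano is to build the engine in FP16 mode, which its GPU handles much faster than FP32. A minimal sketch, assuming your trtexec build supports the --fp16 flag (the exact option set depends on your TensorRT version):

./trtexec --deploy=/path/to/prototxt --output=/name/of/output --fp16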

Thanks.