Base TensorFlow inference performance

Hello,

I’ve built out a basic inference setup using TensorFlow and an SSD trained on MS COCO. My baseline performance with this setup is about 1.3 FPS. I haven’t done much to optimize the model yet, but this seemed slow to me.

I’m also seeing a very long startup time (30-60 s) after the model is loaded but before the graph can perform its first inference (I’ve checked, and it’s the first inference that the program is hanging on). In addition, when I check the process performing inference (it currently runs in its own process), its CPU usage is at 100% at all times.

Just wanted to check whether this is normal and whether there are any small changes I could make to speed things up. A simplified sketch of my loop is below, after the system details.

OS: 4.2
TF version: 1.13.1+NV19.5
Model: ssd_mobilenet_v1
nvpmodel: MODE_30W_ALL
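
For reference, my loop is essentially the standard TF 1.x frozen-graph pattern (simplified sketch; the tensor names are the usual object detection API export names, and the graph filename is a placeholder):

import numpy as np
import tensorflow as tf

# Load the frozen SSD graph exported from the TF object detection API
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:  # placeholder path
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # stand-in for a camera frame
    # The first sess.run() triggers CUDA/cuDNN initialization and autotuning,
    # which is where the long first-inference hang shows up
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image})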

Hi,

It’s recommended to use TensorRT instead.
We have optimized the implementation for Jetson, which can give you much better performance.

Here is a sample for your reference:
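
As a rough sketch of the TF-TRT path (using the contrib API that ships with TF 1.13; the graph filenames are placeholders, and the output node names are the standard detection-API exports):

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen graph to be optimized
frozen_graph = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:  # placeholder path
    frozen_graph.ParseFromString(f.read())

# Replace TensorRT-compatible subgraphs with optimized TRT engines
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=['detection_boxes', 'detection_scores',
             'detection_classes', 'num_detections'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 26,
    precision_mode='FP16')  # Xavier supports FP16 (and INT8)

with tf.gfile.GFile('trt_graph.pb', 'wb') as f:  # placeholder path
    f.write(trt_graph.SerializeToString())

The resulting trt_graph can then be imported and run exactly like the original frozen graph.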

ssd_mobilenet_v1 is one of our test models, and we can get ~28 fps on the Jetson Nano.
You can expect a much higher fps on Xavier.

By the way, please remember to fix the clocks to maximum after setting the 30W mode:

sudo jetson_clocks
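
You can confirm the clocks were applied with:

sudo jetson_clocks --show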

Thanks.