I’ve built out a basic inference setup using TensorFlow and an SSD trained on MS COCO. My baseline performance with this setup is about 1.3 FPS. I haven’t done much to optimize the model yet, but this seemed slow to me.

I’m also seeing a very long start-up time (30–60 s) after the model is loaded but before the graph can perform its first inference (I’ve checked, and it’s the first inference that the program is hanging on). In addition, when I check the inference process (inference currently runs in its own process), its CPU usage is at 100% at all times.

Just wanted to check whether this is normal, and whether there are any small changes I could make to speed things up.
TF version: 1.13.1+NV19.5
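For reference, here is how I’m measuring the per-inference time, so the 30–60 s figure is clearly the first call and not model loading. This is a minimal, self-contained sketch: `FakeModel` below is a hypothetical stand-in for the real `sess.run(...)` call (its one-time `sleep` simulates the graph/CUDA initialization cost), so the pattern runs without TensorFlow installed.

```python
import time

def time_inferences(infer, dummy_input, runs=5):
    """Time each inference call separately; the first call typically
    includes one-time initialization (graph optimization, CUDA context,
    cuDNN autotune), while later calls reflect steady-state speed."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        infer(dummy_input)
        timings.append(time.perf_counter() - start)
    return timings

class FakeModel:
    """Hypothetical stand-in for sess.run: slow once, then fast."""
    def __init__(self):
        self._warmed_up = False

    def __call__(self, x):
        if not self._warmed_up:
            time.sleep(0.05)  # simulate one-time initialization cost
            self._warmed_up = True
        return x

timings = time_inferences(FakeModel(), [0.0])
# With the real graph, running one throwaway inference on dummy data
# right after loading moves this cost out of the first real frame.
```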