Low FPS with TensorRT enabled Tensorflow Object Detection API Models

Hello, I’m attempting to get a custom object detection model running on a Jetson Nano at 20-30 FPS, with the highest accuracy possible. I’ve been following this guide:
https://www.dlology.com/blog/how-to-run-tensorflow-object-detection-model-on-jetson-nano/

and trying to work with ssd mobilenet v1/v2 and ssdlite mobilenet v2, from the model zoo here: models/tf1_detection_zoo.md at master · tensorflow/models · GitHub

Using the method provided in the dlology link for converting to TensorRT and running the code, I get 8-10 FPS with ssd mobilenet v1, 6-8 FPS with ssdlite mobilenet v2, and ssd mobilenet v2 crashes the Nano when I try to create the TRT graph (I think due to OOM). Is there a better “official” way of working with the TensorFlow Object Detection API? Or is there a noob mistake of some sort I’m making here?

Thanks!

EDIT: all these models are running at 300x300 resolution
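For context, the conversion step in the dlology guide goes through TF-TRT’s `create_inference_graph`. A minimal sketch of that step is below, assuming TensorFlow 1.x on JetPack (where TF-TRT lives in `tensorflow.contrib.tensorrt`); the file paths are illustrative, and the output node names are the standard ones for SSD models from the TF1 detection zoo. Shrinking `max_workspace_size_bytes` is one knob that may help with the OOM crash, since the Nano’s 4 GB is shared between CPU and GPU:

```python
# Sketch of the TF-TRT conversion step, assuming TensorFlow 1.x on JetPack.
# Paths are illustrative; output names are the standard TF1 detection zoo ones.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

output_names = ['num_detections', 'detection_boxes',
                'detection_scores', 'detection_classes']

# Load the frozen graph exported from the Object Detection API
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    # A smaller workspace may avoid OOM on the Nano's shared 4 GB memory
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',
    # Only convert larger subgraphs to TensorRT; small segments stay in TF
    minimum_segment_size=50)

# Save the optimized graph for inference
with tf.gfile.GFile('trt_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```

This is a sketch of the conversion only; it has to run on the Nano itself (or a machine with the same TensorRT version), since the generated engines are not portable across devices.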

Sorry for the late response. Has the issue been resolved, or do you still need suggestions?

Hi,

Do you use TF-TRT for inference?
For a stand-alone TensorRT app, we can get around 43 fps for a 300x300 SSD Mobilenet-V1 on Nano.
You can find a reproducible source on the web page below:
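For a rough idea of what the stand-alone TensorRT route looks like, one option is to benchmark the model with the `trtexec` tool that ships with JetPack, after converting it to UFF (e.g. with `convert-to-uff` and the `config.py` from TensorRT’s sampleUffSSD). This is a sketch, not the exact setup behind the 43 fps figure; the file name, input/output node names, and shapes below are assumptions that depend on how the UFF was produced:

```shell
# Sketch: benchmark a UFF-converted SSD Mobilenet with stand-alone TensorRT.
# Assumes sample_ssd.uff was produced via convert-to-uff (names illustrative).
/usr/src/tensorrt/bin/trtexec \
    --uff=sample_ssd.uff \
    --uffInput=Input,3,300,300 \
    --output=NMS \
    --fp16
```

`trtexec` reports average inference latency, from which throughput follows directly; `--fp16` matters on the Nano, since half precision roughly doubles throughput on its Maxwell GPU.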

Thanks.