Getting a TensorFlow-based MobileNet SSD to run with TensorRT for inference speed-up

Hi there,

I have TensorFlow 1.7 installed on my Jetson TX2, and I run inference with a MobileNet-SSD model for a computer vision application at 10-15 FPS. I know that TensorRT is optimized for inference on the TX2, and I was hoping for a way to port my TensorFlow protobuf graph over to TensorRT. Is there a way to do this? If not, what is the best way to use TensorRT with a custom-trained MobileNet-SSD model?
Thanks

Hello,

Included within the TensorRT Python API is the UFF API, a package that contains a set of utilities to convert trained models from various frameworks into a common format (UFF) that TensorRT can import.

The UFF API documentation is located in uff/uff.html and covers two conversion utilities: TensorFlow model stream to UFF and TensorFlow frozen protobuf model to UFF.
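For the frozen-protobuf path, a minimal conversion sketch might look like the one below. The file name "frozen_inference_graph.pb" and the output node name "detection_out" are placeholders; substitute the values from your own exported MobileNet-SSD graph.

```python
# Minimal sketch: convert a frozen TensorFlow .pb graph to UFF.
# File names and the output node name are placeholders for illustration.
import uff

uff_model = uff.from_tensorflow_frozen_model(
    frozen_file="frozen_inference_graph.pb",  # frozen protobuf exported from TensorFlow
    output_nodes=["detection_out"],           # output tensor name(s) of your graph
    output_filename="mobilenet_ssd.uff",      # serialized UFF written to disk
)
```

The resulting .uff file can then be imported by TensorRT's UFF parser to build an optimized inference engine on the TX2.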

Please reference: UFF Converter — NVIDIA TensorRT Standard Python API Documentation 8.4.3

Regards,
NVIDIA Enterprise Support