How can I convert a TensorFlow object detection model (SSD-MobileNet) to a TensorRT model for inference on Jetson TX1?

Description

Hello
I have trained (fine-tuned) a custom TensorFlow object detection model on a custom dataset, and I want to convert it to a TensorRT model for inference on a Jetson TX1 board.
I have tried running inference without converting it to TensorRT, but I only get 3 or 4 FPS on the board, which is very slow…
So I need to convert this model to a TensorRT model.
How can I do that?
I only have some weight and checkpoint files and nothing more…
Can anyone help me with how to convert the model?

Environment

TensorRT Version: 8.2.1
GPU Type: Jetson TX1
Nvidia Driver Version: I don’t know
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System + Version: Ubuntu 18.04 (L4T 32.7.1)
Python Version (if applicable): 3.6

Hi,

This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we will move this post to the Jetson-related forum.

Thanks!

@NVES
Actually, I asked about this issue in one of the topics on the Jetson community forum, and they said “this is an AI issue” 😄😄
Anyway, thanks for your help and the links.
If I have any more questions, I will ask you about them.
Good luck 🌸✋

@NVES
Thank you for your response.
I have run some SSD-MobileNet models in the Docker container, and they were very fast (almost 50 FPS),
but I want to train a model on a custom dataset and convert it to a TensorRT model for fast inference.
What can I do?

Hi @Didehban.nv,
You can check the below approaches to convert your TensorFlow model to TensorRT; a rough sketch of each is included after this list.
1 - You can try TF-TRT conversion using the mentioned link.
2 - You can convert your TensorFlow model to ONNX and then do the ONNX-to-TensorRT conversion.
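For approach 1, here is a minimal TF-TRT sketch. It assumes you have first exported your checkpoint files to a TensorFlow SavedModel (for example with the Object Detection API's exporter_main_v2.py, or export_inference_graph.py for TF1); the directory names below are placeholders, and the exact parameter names can differ slightly between TensorFlow versions.

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Placeholder paths - point these at your exported SavedModel and an output directory
SAVED_MODEL_DIR = "exported_model/saved_model"
TFTRT_OUTPUT_DIR = "exported_model/tftrt_fp16"

# FP16 usually works well on Jetson; keep the workspace small on the TX1 (4 GB shared memory)
params = trt.TrtConversionParams(
    precision_mode=trt.TrtPrecisionMode.FP16,
    max_workspace_size_bytes=1 << 28,
)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=SAVED_MODEL_DIR,
    conversion_params=params,
)
converter.convert()
converter.save(TFTRT_OUTPUT_DIR)

Note that TF-TRT still requires TensorFlow at inference time, and only the supported subgraphs are executed by TensorRT; unsupported ops fall back to TensorFlow.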
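For approach 2, here is a rough sketch of the SavedModel → ONNX → TensorRT engine path, using the tf2onnx Python API and the TensorRT Python ONNX parser. File names are placeholders, and you can also build the engine with the trtexec tool that ships with JetPack instead of the second half of this script.

import tf2onnx
import tensorrt as trt

# Step 1: SavedModel -> ONNX (this can run on any machine)
tf2onnx.convert.from_saved_model(
    "exported_model/saved_model",        # placeholder path to your exported SavedModel
    opset=13,
    output_path="ssd_mobilenet.onnx",
)

# Step 2: ONNX -> TensorRT engine (run this on the TX1 itself)
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("ssd_mobilenet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28      # keep the workspace small on the TX1
config.set_flag(trt.BuilderFlag.FP16)    # the TX1 GPU supports fast FP16

engine_bytes = builder.build_serialized_network(network, config)
with open("ssd_mobilenet.engine", "wb") as f:
    f.write(engine_bytes)

Keep in mind that the engine has to be built on the TX1 itself (or a device with the same GPU and TensorRT version), because TensorRT engines are not portable. Also, SSD post-processing ops such as NMS are not always accepted by the ONNX parser out of the box, so some graph modification or a TensorRT plugin may be needed for the detection outputs.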
Thank you.