TensorRT or TFLite or ONNX for EfficientDet using Jetson TX2

Hello, I trained a model using the TensorFlow Object Detection API, then froze the model with the latest checkpoint from training, which generated a .pb file. I then loaded the frozen model onto my Jetson TX2 and performed inference with the trained model. The detections look great; however, I noticed that inference on the Jetson TX2 runs at only around 2 FPS. I read that different optimizers can reduce the inference time without affecting the performance of the model; some of these optimizers are TensorRT, TFLite, and ONNX. With this information, the following questions arose:

  • What is the difference between the aforementioned optimizers?
  • How can I know whether a model is supported by these optimizers?
  • Which optimizer is most suitable for an application on the Jetson TX2 (I believe TensorRT)?
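For context, this is roughly how I load the frozen graph and measure the ~2 FPS (the tensor names are the standard TFOD outputs; the file path and input size are just placeholders for my model):

```python
import time
import numpy as np
import tensorflow as tf  # TF 1.x on the Jetson

# Load the frozen graph exported by the TFOD API
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# Dummy image just for timing (real input is a camera frame)
image = np.random.randint(0, 255, (1, 512, 512, 3), dtype=np.uint8)

with tf.Session(graph=graph) as sess:
    inputs = graph.get_tensor_by_name("image_tensor:0")
    boxes = graph.get_tensor_by_name("detection_boxes:0")
    scores = graph.get_tensor_by_name("detection_scores:0")
    classes = graph.get_tensor_by_name("detection_classes:0")

    start = time.time()
    for _ in range(20):
        sess.run([boxes, scores, classes], feed_dict={inputs: image})
    print("FPS:", 20 / (time.time() - start))
```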

Thank you in advance for all the support.

Hi,

1. TensorRT is a GPU-optimized inference engine. TFLite and ONNX are different model formats.
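For example, a rough sketch of the difference (the file paths below are just placeholders, not your model): TFLite and ONNX are formats you convert a trained model into, while TensorRT is a runtime that builds an optimized engine for the GPU.

```python
import tensorflow as tf

# TFLite: convert an exported SavedModel into a .tflite file (a model format)
converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# ONNX: produced by a separate exporter, e.g. tf2onnx:
#   python -m tf2onnx.convert --saved-model exported_model/saved_model --output model.onnx
#
# TensorRT: consumes the model (e.g. the ONNX file) and builds a GPU engine:
#   trtexec --onnx=model.onnx --saveEngine=model.trt --fp16
```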

2. Most frameworks are layer-based.
So you will need to check whether all the layers of your model are supported.

Below is the TensorRT support matrix for your reference:
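In addition to the support matrix, a quick practical check (assuming you first export the model to ONNX) is to try parsing it with the TensorRT ONNX parser and print any unsupported layers, e.g.:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # Each error typically names the op/layer that could not be parsed
        for i in range(parser.num_errors):
            print(parser.get_error(i))
    else:
        print("All layers parsed successfully")
```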

3. Yes, it’s recommended to use TensorRT.
You can find an example of deploying a TFOD model with TensorRT below:
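For reference, one possible TF-TRT route for a TF1 frozen TFOD graph looks roughly like this (the output node names are the standard TFOD ones, and the linked example may use a different workflow):

```python
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Load the frozen TFOD graph
frozen_graph = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    frozen_graph.ParseFromString(f.read())

# Replace supported subgraphs with TensorRT engines
converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,
    nodes_blacklist=["detection_boxes", "detection_scores",
                     "detection_classes", "num_detections"],
    precision_mode="FP16",   # the TX2 benefits from FP16
    max_batch_size=1,
    is_dynamic_op=True)      # build engines at runtime
trt_graph = converter.convert()

with tf.gfile.GFile("trt_graph.pb", "wb") as f:
    f.write(trt_graph.SerializeToString())
```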

Thanks.
