Getting started with TensorFlow to TRT conversion

It seems that there are two ways to convert a TensorFlow model to TRT:

  1. using the frozen-graph approach, followed by convert-to-uff.
  2. using NVIDIA's TensorFlow containers for TF->TRT.

Q: Given that the model in question is a MobileNet SSD v2 object detection model, which of the two approaches above should be used?
ref:

Hi,

First, that sample is out of date; it is based on JetPack 3.2.
We recommend checking our built-in samples directly instead:

/usr/src/tensorrt/samples/

The sample converts the model into a TensorRT engine via uff.from_tensorflow_frozen_model, which is equivalent to using convert-to-uff.
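For reference, a minimal sketch of that conversion, assuming a frozen graph named frozen_inference_graph.pb; the output node name and file names are placeholders, and SSD-style models usually also need a plugin preprocessor config:

import uff

# Convert a TensorFlow frozen graph to UFF.
# File and node names below are assumptions; SSD detection heads typically
# require a preprocessor config that maps unsupported ops to TensorRT plugins.
uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",
    output_nodes=["NMS"],
    output_filename="model.uff")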

We are not sure which TF->TRT usage you are interested in.
Currently, we recommend users convert their model into ONNX format with keras2onnx or tf2onnx, since the UFF parser is deprecated.
This can be done in the l4t-tensorflow:r32.4.3-tf1.15-py3 container, since it has TensorFlow pre-installed.
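For a frozen graph such as the MobileNet SSD v2 detection model, a minimal tf2onnx sketch might look like the following; the file name, the input/output tensor names (the usual TF Object Detection API ones), and the opset are assumptions, and the detection head may still need TensorRT plugin handling afterwards:

import tensorflow as tf
import tf2onnx

# Load the frozen graph (file name is a placeholder).
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    # Tensor names below are the standard TF Object Detection API ones (assumed).
    onnx_graph = tf2onnx.tfonnx.process_tf_graph(
        graph,
        input_names=["image_tensor:0"],
        output_names=["detection_boxes:0", "detection_scores:0",
                      "detection_classes:0", "num_detections:0"],
        opset=11)
    model_proto = onnx_graph.make_model("mobilenet_ssd_v2")
    with open("model.onnx", "wb") as f:
        f.write(model_proto.SerializeToString())

tf2onnx also provides a command-line entry point (python -m tf2onnx.convert) that does the same from a frozen graph or SavedModel.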

Another common use case is TF-TRT, which runs TensorRT acceleration directly from the TensorFlow package.

This is also supported in the l4t-tensorflow:r32.4.3-tf1.15-py3 container, since TensorFlow is pre-installed.
You don’t need to convert the model into ONNX or UFF for TF-TRT, but the performance is lower due to the framework overhead.
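A minimal TF-TRT sketch for TensorFlow 1.15, assuming the model is available as a SavedModel; the directory names and precision mode are placeholders:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert a TF 1.x SavedModel: TensorRT-compatible subgraphs are replaced by
# TRT ops, everything else keeps running in TensorFlow.
converter = trt.TrtGraphConverter(
    input_saved_model_dir="saved_model",   # assumed input path
    precision_mode="FP16")
converter.convert()
converter.save("saved_model_trt")          # assumed output path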

Thanks.

Thank you for the detailed response!
Could you elaborate on how to take an .h5 / .engine file and convert it into a format compatible with DeepStream, please?
ref: Given there is .engine file & h5, how to incorporate it into Deepstream? - #2 by Andrey1984

Hi,

Please convert the .h5 model into a TensorRT .engine first.
A sample from the community can be found here:
https://github.com/jeng1220/KerasToTensorRT/blob/master/tftrt_example.py
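One possible path is .h5 -> ONNX -> engine. A minimal sketch, assuming keras2onnx and the TensorRT 7.x Python bindings shipped with JetPack 4.4; file names are placeholders, and models with dynamic input shapes may additionally need an optimization profile:

import keras2onnx
import tensorrt as trt
from tensorflow.keras.models import load_model

# Step 1: Keras .h5 -> ONNX (file names are placeholders).
model = load_model("model.h5")
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, "model.onnx")

# Step 2: ONNX -> TensorRT engine.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(EXPLICIT_BATCH) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parsing failed")
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28
    engine = builder.build_engine(network, config)
    with open("model.engine", "wb") as f:
        f.write(engine.serialize())

The trtexec tool that ships with TensorRT can also build the engine from the ONNX file on the command line.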

After that, you can add the engine path to the configuration file:
Ex. config_infer_primary.txt

[property]
...
model-engine-file=[your/file/name].engine
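
A slightly fuller sketch of the relevant nvinfer settings; the values and the label file path are placeholders that depend on your model:

[property]
gpu-id=0
model-engine-file=[your/file/name].engine
labelfile-path=[your/labels].txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=91
gie-unique-id=1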

Thanks.