First, the sample is outdated; it is based on JetPack 3.2.
We recommend checking our built-in samples directly:
/usr/src/tensorrt/samples/
The sample converts the model into a TensorRT engine via uff.from_tensorflow_frozen_model, which is equivalent to running the convert-to-uff script.
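For reference, below is a minimal sketch of that call; the frozen-graph path and the output-node name are placeholders, so replace them with your own model's values:

```python
import uff

# Convert a TensorFlow frozen graph (.pb) into a UFF file for the UFF parser.
# "frozen_model.pb" and "logits" are placeholder names for illustration only.
uff.from_tensorflow_frozen_model(
    frozen_file="frozen_model.pb",       # path to the frozen TensorFlow graph
    output_nodes=["logits"],             # names of the graph's output nodes
    output_filename="frozen_model.uff",  # UFF file consumed by the TensorRT UFF parser
)
```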
We are not sure which TF->TRT workflow you are interested in.
Currently, we recommend converting the model into ONNX format with keras2onnx or tf2onnx, since the UFF parser is deprecated.
This can be done inside the l4t-tensorflow:r32.4.3-tf1.15-py3 container, since it has TensorFlow pre-installed.
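As a rough sketch of the keras2onnx path (the MobileNetV2 model below is just a placeholder; substitute your own tf.keras model, or use tf2onnx's command-line converter for frozen graphs):

```python
import tensorflow as tf
import keras2onnx

# Placeholder network for illustration; load your own tf.keras model instead.
model = tf.keras.applications.MobileNetV2(weights=None)

# Convert the Keras model to an ONNX graph and write it to disk.
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, "model.onnx")
```

The resulting model.onnx can then be fed to TensorRT's ONNX parser or to trtexec.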
Another common use case is TF-TRT, which runs TensorRT acceleration directly from the TensorFlow package.
This is also supported in the l4t-tensorflow:r32.4.3-tf1.15-py3 container, since TensorFlow is pre-installed.
You don't need to convert the model into ONNX or UFF for TF-TRT, but the performance is lower due to the framework overhead.
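A minimal TF-TRT sketch for TF 1.15 looks like the following; the SavedModel directories are placeholder paths and the precision mode is an assumption (FP16 generally works well on Jetson):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Optimize an existing TensorFlow SavedModel with TF-TRT (TF 1.x API).
converter = trt.TrtGraphConverter(
    input_saved_model_dir="saved_model_dir",  # placeholder: original SavedModel
    precision_mode="FP16",                    # assumption: FP16 for Jetson GPUs
)
converter.convert()                 # replace supported subgraphs with TensorRT ops
converter.save("trt_saved_model_dir")  # SavedModel with embedded TensorRT engines
```

The optimized SavedModel is then loaded and run with the normal TensorFlow API, so no separate TensorRT engine file is needed.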