Different TensorFlow model formats for TensorRT

Hi guys,
I'm confused about the input model formats for TRT.
I searched the official guide at
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#c_topics
which says:
ONNX: parser = nvonnxparser::createParser(network, gLogger);
NVCaffe: ICaffeParser* parser = createCaffeParser();
UFF: parser = createUffParser();

Does this mean TensorRT only supports the UFF format for TensorFlow models?

Which docs list all the model formats TRT supports in detail?

As far as I know, TensorFlow models can come as .pb, .uff, or .h5 (Keras) files. Does TRT support all of these formats?

best wishes

Yes, only the UFF parser is available for TensorFlow.

It's all about which file formats you can feed to a parser available in TensorRT.
A parser is the entry point for your model file into TensorRT's optimization pipeline.
As you stated, there are 3 parsers.
As their names suggest:
-the ONNX parser can read only .onnx files,
-the Caffe parser needs a Caffe model,
-and the UFF parser needs a model in .uff format.
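To make the three entry points concrete, here is a sketch of how each parser is created in C++ (untested here; it assumes a TensorRT 5.x-era install with its headers on the include path, and that `gLogger` is an `nvinfer1::ILogger` implementation you provide, as in the samples):

```cpp
#include "NvInfer.h"       // core TensorRT: builder, network definition
#include "NvOnnxParser.h"  // ONNX parser
#include "NvCaffeParser.h" // Caffe parser
#include "NvUffParser.h"   // UFF parser (for converted TensorFlow models)

// gLogger: your nvinfer1::ILogger implementation (assumed to exist).
nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
nvinfer1::INetworkDefinition* network = builder->createNetwork();

// ONNX models (.onnx):
nvonnxparser::IParser* onnxParser = nvonnxparser::createParser(*network, gLogger);

// Caffe models (.prototxt + .caffemodel):
nvcaffeparser1::ICaffeParser* caffeParser = nvcaffeparser1::createCaffeParser();

// TensorFlow models converted to UFF (.uff):
nvuffparser::IUffParser* uffParser = nvuffparser::createUffParser();
```

Whichever parser you use, the result of parsing is the same `INetworkDefinition`, which the builder then optimizes into an engine.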

TensorFlow's frozen graph (.pb) file can be converted to a .uff file.
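The conversion is typically done with NVIDIA's `uff` Python package that ships with TensorRT. A minimal sketch (the file names and the output node name here are placeholders; the output node is model-specific and you need to look it up in your own graph):

```python
# Sketch: convert a TensorFlow frozen graph (.pb) to UFF (.uff).
# Requires the `uff` package installed with TensorRT; not runnable without it.
import uff

# "model.pb" / "model.uff" and the output node name are hypothetical examples.
uff.from_tensorflow_frozen_model(
    "model.pb",                       # frozen graph produced by TensorFlow
    output_nodes=["softmax/Softmax"], # your graph's output tensor name(s)
    output_filename="model.uff",      # file the UFF parser will read
)
```

TensorRT also ships a `convert-to-uff` command-line wrapper around the same package, if you prefer not to write Python.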

Thanks for your help!