Hi,
These are two different frameworks: TensorFlow and TensorRT.
1.
It’s possible to run inference on a TensorFlow model through the TensorFlow C++ interface without converting it.
You will need to build the TensorFlow C++ library. Please check this topic for more information:
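As a rough illustration, below is a minimal sketch of that workflow: it loads a SavedModel with the TensorFlow C++ API and runs a single inference. The model path, input shape, and tensor names here are placeholder assumptions; substitute the ones from your own model (for example, as reported by the saved_model_cli tool).

```cpp
#include <iostream>
#include <vector>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  tensorflow::SavedModelBundle bundle;
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;

  // "/path/to/saved_model" is a placeholder for your exported SavedModel.
  tensorflow::Status status = tensorflow::LoadSavedModel(
      session_options, run_options, "/path/to/saved_model",
      {tensorflow::kSavedModelTagServe}, &bundle);
  if (!status.ok()) {
    std::cerr << "Failed to load model: " << status.ToString() << std::endl;
    return 1;
  }

  // Example input: one 224x224 RGB image filled with zeros. The shape and
  // the tensor names below are assumptions; check your model's signature
  // (e.g. with saved_model_cli) for the real ones.
  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, 224, 224, 3}));
  input.flat<float>().setZero();

  std::vector<tensorflow::Tensor> outputs;
  status = bundle.session->Run({{"serving_default_input:0", input}},
                               {"StatefulPartitionedCall:0"}, {}, &outputs);
  if (!status.ok()) {
    std::cerr << "Inference failed: " << status.ToString() << std::endl;
    return 1;
  }
  std::cout << "Output shape: " << outputs[0].shape().DebugString()
            << std::endl;
  return 0;
}
```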
2.
However, we recommend converting your model to TensorRT, which is an optimizer for GPU-based inference.
The first step is to check whether all of the operations used in your model are supported by TensorRT:
If yes, you will need to convert the TensorFlow model into an intermediate format (UFF/ONNX) that TensorRT takes as input; see the sketch below.
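One convenient way to run the support check (not the only one) is to simply try parsing the converted model with TensorRT's ONNX parser: parsing fails with an error naming each unsupported operation. The sketch below assumes a TensorRT 7-era C++ API and a file named model.onnx, e.g. produced with the tf2onnx converter (python -m tf2onnx.convert --saved-model <dir> --output model.onnx); if parsing succeeds, it goes on to build an optimized engine.

```cpp
#include <iostream>

#include "NvInfer.h"
#include "NvOnnxParser.h"

// Minimal logger implementation required by the TensorRT API.
class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
  }
} gLogger;

int main() {
  auto builder = nvinfer1::createInferBuilder(gLogger);
  const auto explicit_batch = 1U << static_cast<uint32_t>(
      nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
  auto network = builder->createNetworkV2(explicit_batch);
  auto parser = nvonnxparser::createParser(*network, gLogger);

  // Parsing fails if the model uses an operation TensorRT does not support;
  // the errors printed below name the offending ops. "model.onnx" is a
  // placeholder for your converted model.
  if (!parser->parseFromFile(
          "model.onnx",
          static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
    for (int i = 0; i < parser->getNbErrors(); ++i) {
      std::cout << parser->getError(i)->desc() << std::endl;
    }
    return 1;
  }

  // All ops parsed; build the optimized engine.
  auto config = builder->createBuilderConfig();
  config->setMaxWorkspaceSize(1ULL << 28);  // 256 MiB builder scratch space
  auto engine = builder->buildEngineWithConfig(*network, *config);
  std::cout << (engine ? "Engine built" : "Engine build failed") << std::endl;
  return 0;
}
```

The trtexec tool that ships with TensorRT performs the same parse-and-build check from the command line (trtexec --onnx=model.onnx).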
3.
There is also an alternative: convert the model to TensorRT from within TensorFlow directly (TF-TRT).
You can check this sample for more information:
https://github.com/tensorflow/tensorrt/tree/master/tftrt/examples/object_detection
Thanks.