and I was able to load it back and run inference with Python. However, I'd like to do the loading and inference in C++. Is there an example of how to implement the following Python lines?
This is the TensorFlow sample.
The C++ version is identical to the standard C++ usage of TensorFlow.
But since you have applied TensorRT acceleration, it's recommended to convert the model into pure TensorRT, which is better optimized for Jetson.
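If you stay with TF-TRT, a minimal sketch of the C++ side might look like the following, assuming a TensorFlow C++ build with TensorRT support enabled; the model path and tensor names are placeholders, not values from your model:

```cpp
// Sketch: load a (TF-TRT converted) saved_model with the TensorFlow C++ API.
// TF-TRT writes out a regular saved_model, so it loads like any other.
#include <iostream>
#include <vector>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  tensorflow::SavedModelBundle bundle;
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;

  tensorflow::Status status = tensorflow::LoadSavedModel(
      session_options, run_options, "/path/to/saved_model",  // placeholder path
      {tensorflow::kSavedModelTagServe}, &bundle);
  if (!status.ok()) {
    std::cerr << status.ToString() << std::endl;
    return 1;
  }

  // Hypothetical input: a single 1x224x224x3 float image.
  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, 224, 224, 3}));
  std::vector<tensorflow::Tensor> outputs;
  status = bundle.session->Run(
      {{"serving_default_input:0", input}},  // input tensor name (assumed)
      {"StatefulPartitionedCall:0"},         // output tensor name (assumed)
      {}, &outputs);
  if (!status.ok()) {
    std::cerr << status.ToString() << std::endl;
    return 1;
  }
  std::cout << "Output shape: " << outputs[0].shape().DebugString() << std::endl;
  return 0;
}
```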
So TrtGraphConverterV2 converts the saved_model format into some 'optimized' format, which then needs to be converted again into UFF format before it can be loaded by C++ code?
Q: When will TensorRT support layer XYZ required by my network in the UFF parser?
A: UFF is deprecated. We recommend users switch their workflows to ONNX. The TensorRT ONNX parser is an open source project.
These are two different methods: TF-TRT and pure TensorRT.
In TF-TRT, there is an option to apply the TensorRT optimization.
The C++ and Python interfaces should be very similar.
In pure TensorRT, you will need to convert the model into UFF (for TensorFlow v1.15.x) or ONNX (for TensorFlow v2.x), and then feed it into TensorRT to generate the engine.
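As a rough sketch of that second step, here is how an ONNX model could be parsed and built into a serialized engine with the TensorRT C++ API. This assumes TensorRT 8.x; the file names are placeholders:

```cpp
// Sketch: parse an ONNX model and build a serialized TensorRT engine.
#include <fstream>
#include <iostream>
#include <memory>

#include "NvInfer.h"
#include "NvOnnxParser.h"

class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
  }
};

int main() {
  Logger logger;
  auto builder = std::unique_ptr<nvinfer1::IBuilder>(
      nvinfer1::createInferBuilder(logger));
  const auto explicit_batch = 1U << static_cast<uint32_t>(
      nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
  auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(
      builder->createNetworkV2(explicit_batch));

  auto parser = std::unique_ptr<nvonnxparser::IParser>(
      nvonnxparser::createParser(*network, logger));
  if (!parser->parseFromFile("model.onnx",  // placeholder file name
          static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
    std::cerr << "Failed to parse the ONNX file" << std::endl;
    return 1;
  }

  auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(
      builder->createBuilderConfig());
  config->setMaxWorkspaceSize(1 << 28);  // 256 MiB; tune for your device

  // Build and serialize the engine so it can be reloaded at runtime.
  auto serialized = std::unique_ptr<nvinfer1::IHostMemory>(
      builder->buildSerializedNetwork(*network, *config));
  if (!serialized) {
    std::cerr << "Engine build failed" << std::endl;
    return 1;
  }
  std::ofstream out("model.engine", std::ios::binary);
  out.write(static_cast<const char*>(serialized->data()), serialized->size());
  return 0;
}
```

The trtexec tool shipped with TensorRT can do the same from the command line (trtexec --onnx=model.onnx --saveEngine=model.engine).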
Since TensorRT does the optimization based on hardware information, the engine (for both TF-TRT and pure TensorRT) is strongly hardware-dependent and cannot be used across platforms.
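In practice this means the engine should be built on the target Jetson itself and rebuilt after any TensorRT or hardware change. A minimal sketch of the runtime side, again with placeholder names and assuming a static-shape network with float inputs and outputs:

```cpp
// Sketch: deserialize an engine built on this same device and run it.
#include <cuda_runtime_api.h>

#include <fstream>
#include <iostream>
#include <iterator>
#include <memory>
#include <vector>

#include "NvInfer.h"

class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
  }
};

int main() {
  // Read the serialized engine from disk.
  std::ifstream in("model.engine", std::ios::binary);  // placeholder file name
  std::vector<char> blob((std::istreambuf_iterator<char>(in)),
                         std::istreambuf_iterator<char>());

  Logger logger;
  auto runtime = std::unique_ptr<nvinfer1::IRuntime>(
      nvinfer1::createInferRuntime(logger));
  auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(
      runtime->deserializeCudaEngine(blob.data(), blob.size()));
  if (!engine) {
    std::cerr << "Deserialization failed (different GPU or TensorRT version?)"
              << std::endl;
    return 1;
  }
  auto context = std::unique_ptr<nvinfer1::IExecutionContext>(
      engine->createExecutionContext());

  // Allocate one device buffer per binding; assumes static shapes, float I/O.
  std::vector<void*> bindings(engine->getNbBindings());
  for (int i = 0; i < engine->getNbBindings(); ++i) {
    nvinfer1::Dims dims = engine->getBindingDimensions(i);
    size_t count = 1;
    for (int d = 0; d < dims.nbDims; ++d) count *= dims.d[d];
    cudaMalloc(&bindings[i], count * sizeof(float));
  }

  // Real code would cudaMemcpy the input before and the output after this.
  context->executeV2(bindings.data());

  for (void* p : bindings) cudaFree(p);
  return 0;
}
```

Both sketches link against nvinfer (plus nvonnxparser for the builder) and the CUDA runtime.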