convert_variables_to_constants_v2 returns a graph function (a ConcreteFunction), not a GraphDef.
Therefore, it has no .node attribute.
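For reference, in TF2 the GraphDef (and its .node list) has to be pulled off the frozen function's graph rather than off the returned function itself. A minimal sketch, where the toy Keras model is just a placeholder for whatever model you actually have:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Placeholder model -- substitute your own tf.Module / Keras model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
concrete = tf.function(model).get_concrete_function(
    tf.TensorSpec([None, 4], tf.float32)
)

# Returns a ConcreteFunction, not a GraphDef -- so no `.node` on it directly.
frozen_func = convert_variables_to_constants_v2(concrete)

# The GraphDef (and its `.node` list) lives on the captured graph:
graph_def = frozen_func.graph.as_graph_def()
print([n.op for n in graph_def.node])
```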
There is a vital step missing from your instructions.
Also, the GitHub examples that you link do not serialize the frozen graph function to a .plan.
Please update the instructions so that one can go from a frozen graph function to a .plan.
After getting the frozen_graph_def, how do you suggest we serialize it further to a .plan? The `serialized_segment` attr of `TRTEngineOp` is actually empty.
I completely ditched serializing to a .plan file, since the documentation is hard to follow and incomplete.
Right now, the most straightforward way is to go from TF2 → ONNX and then parse your ONNX model with TensorRT C++ code. Still quite some work, but at least it's understandable and it actually works.
I stumbled upon the exact same problem: for the preceding steps of the pipeline, the documentation nicely distinguishes between TF1 and TF2. However, section 2.10 does not even mention that create_inference_graph is only usable in TF1, nor does it explain how to create a .plan file with TF2.
Is the recommended way for TF2 really to create ONNX first and then create the .plan from there?
Is there any update on this by now? I have had this problem for quite a while, and I now use the TF2 → ONNX, ONNX → TensorRT approach, but the inference time is a lot worse than with TF-TRT.
I get an inference time of 25 ms with TF-TRT and 700 ms with the TensorRT plan.