I converted a TensorFlow 1 frozen.pb to UFF and ran inference on it with TensorRT. I'm not sure I'm handling this correctly, but I think my steps are right; maybe someone can confirm. I believe the correct process is as follows:
- train the model with TensorFlow 1 and freeze the variables into a GraphDef as frozen.pb
- convert frozen.pb to a UFF model
- if the UFF model contains an unsupported operator, register it as a plugin and run inference through the Python or C++ API; in my case the converter logs:
  Converting up_sampling2d/ResizeBilinear as custom op: ResizeBilinear
- serialize the engine built from the UFF file to engine.plan, then deserialize it with TensorRT for prediction
I had also tried converting it to ONNX with tf2onnx, but got this error:
2020-11-02 16:54:08,778 - ERROR - Failed to convert node 'up_sampling2d_2/ResizeBilinear' (fct=<bound method Resize.version_7 of <class 'tf2onnx.onnx_opset.nn.Resize'>>)
'OP=Upsample\nName=up_sampling2d_2/ResizeBilinear\nInputs:\n\tre_lu_9/Relu:0=Relu, [-1, 256, 256, 64], 1\n\tup_sampling2d_2/mul:0=Mul, , 6\nOutpus:\n\tup_sampling2d_2/ResizeBilinear:0=[-1, -1, -1, 64], 1'
Traceback (most recent call last):
File "/home/liushuai/miniconda3/lib/python3.7/site-packages/tf2onnx/tfonnx.py", line 286, in tensorflow_onnx_mapping
func(g, node, **kwargs)
File "/home/liushuai/miniconda3/lib/python3.7/site-packages/tf2onnx/onnx_opset/nn.py", line 862, in version_7
target_shape = node.inputs.get_tensor_value()
File "/home/liushuai/miniconda3/lib/python3.7/site-packages/tf2onnx/graph.py", line 316, in get_tensor_value
The failure seems to be that the Resize handler for opset 7 expects the target shape to be a constant it can read with get_tensor_value(), while Keras's UpSampling2D computes the shape at runtime through that Mul node. So I fell back to trying frozen.pb --> uff --> plan.