Description
I have a frozen .pb model exported from TensorFlow. My goal is to convert this model into a TensorRT engine file and then deploy it on my own GPUs.
First, I tried to convert my .pb model with tf2onnx, using the following command:
python -m tf2onnx.convert --input frozen_cnn.pb --inputs inputs/images:0 --outputs outputs/preds:0 --output model.onnx --verbose --opset 11
This produced model.onnx.
However, when converting this model.onnx to a TensorRT engine file, I got the following error:
Unsupported ONNX data type: UINT8 (2)
ERROR: images:0:188 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype
I worked around this error by following the instructions in this post (changing the declared input dtype of the ONNX graph), though I'm not sure that was the right approach.
The second error is:
ERROR: inputs/images:0_TRT_DYNAMIC_SHAPES:58 In function importInput:
[7] Assertion failed: convert_dims(onnx_tensor_type.shape().dim(), trt_dims)
I then tried to refer to the TensorRT dynamic shape sample, but due to my limited understanding of that sample, I couldn't solve this problem. Could anyone give me some suggestions? I have already attached my model to this post, as you can see. Any cues would be highly appreciated.
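For anyone hitting the same dynamic-shape assertion: with the TensorRT 7.x Python API, a network with dynamic input dimensions must be built as an explicit-batch network with at least one optimization profile describing the min/opt/max shape range of each dynamic input. Below is a minimal sketch under that assumption; the input name and shapes are placeholders and must be replaced with the real ones from your ONNX graph.

```python
def build_engine(onnx_path="model.onnx",
                 input_name="inputs/images:0",
                 min_shape=(1, 32, 100, 3),   # placeholder shapes --
                 opt_shape=(8, 32, 100, 3),   # substitute the real dims
                 max_shape=(32, 32, 100, 3)): # from your model
    """Build a TensorRT engine with an optimization profile for one dynamic input."""
    import tensorrt as trt  # imported inside so the sketch parses without TensorRT

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Dynamic shapes require an explicit-batch network definition.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB (TRT 7.x API)

    # The profile tells TensorRT the allowed min/opt/max range of every
    # dynamic input; building with dynamic dims fails without one.
    profile = builder.create_optimization_profile()
    profile.set_shape(input_name, min_shape, opt_shape, max_shape)
    config.add_optimization_profile(profile)

    return builder.build_engine(network, config)
```

If the input shape is in fact fixed, an alternative is to re-export with explicit dimensions (e.g. tf2onnx's `--inputs inputs/images:0[1,32,100,3]` syntax) so no profile is needed.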
Environment
TensorRT Version: 7.1
GPU Type: 2080Ti
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7
Relevant Files
Please check the links in the sections above.