Following the tutorial from the notebook tensorflow-onnx/ConvertingSSDMobilenetToONNX.ipynb at main · onnx/tensorflow-onnx · GitHub, I am trying to convert MobileNetV2 and V3 frozen models from TensorFlow (frozen_inference_graph.pb or saved_model.pb) to ONNX and then to TensorRT files.
Under the NGC Docker images 20.01-tf1-py3 and 19.05-py3 I am using both this project and tensorflow-onnx.
I always get different issues; the furthest I got was under 20.01-tf1-py3 with both onnx-tensorrt and tensorflow-onnx on their master branches, installing the projects from source.
I was able to create the .onnx file, but when I try to create the .trt file I get the following:
onnx2trt /media/bnascimento/project/frozen_inference_graph.onnx -o /media/bnascimento/project/frozen_inference_graph.trt
----------------------------------------------------------------
Input filename: /media/bnascimento/project/frozen_inference_graph.onnx
ONNX IR version: 0.0.6
Opset version: 10
Producer name: tf2onnx
Producer version: 1.6.0
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
Parsing model
Unsupported ONNX data type: UINT8 (2)
ERROR: image_tensor:0:190 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype)
I suspect this has to do with the input tensor for the image, but I don't know how to avoid this issue. Has anyone run into similar issues before?
Cheers
Bruno