I succeeded in using onnx-simplifier with the --skip-optimization flag.
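For reference, a minimal sketch of such an invocation (file paths are placeholders; only the --skip-optimization flag comes from the run described here):

import subprocess

# Minimal sketch of the onnx-simplifier invocation; "model.onnx" and
# "model_sim.onnx" are placeholder paths, not the real files.
subprocess.run(
    [
        "python3", "-m", "onnxsim",
        "model.onnx",            # input model (placeholder)
        "model_sim.onnx",        # simplified output (placeholder)
        "--skip-optimization",   # skip the onnx optimizer passes
    ],
    check=True,
)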
But for the RCNN model it still fails, this time because of the model's enormous size:
ValueError: Message ONNX_REL_1_6.ModelProto exceeds maximum protobuf size of 2GB: 5451307912
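(That 2 GB figure is protobuf's hard limit on a single serialized message, 2^31 - 1 bytes, and this graph serializes to about 5.4 GB. A rough way to see where a model's bulk comes from, as a sketch with a placeholder path and assuming the model is still small enough to load at all:)

import onnx

model = onnx.load("model.onnx")          # placeholder path
total = 0
for init in model.graph.initializer:
    size = len(init.raw_data)            # weight bytes stored in this initializer
    total += size
    if size > 100 * 1024 * 1024:         # report initializers larger than 100 MB
        print(init.name, size)
print("total initializer bytes:", total, "/ protobuf limit:", 2**31 - 1)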
I tried MobileNet from the TF Object Detection API, and the simplifier with --skip-optimization worked. But it still does not load in C++:
--- Begin node ---
output: "detection_classes:0"
name: "node_detection_classes:0"
op_type: "Constant"
attribute {
name: "value"
t {
dims: 1
dims: 50
data_type: 1
name: "detection_classes:0"
raw_data: "..."
}
type: TENSOR
}
--- End node ---
ERROR: /mnt/Data2/reps/TensorRT/parsers/onnx/ModelImporter.cpp:493 In function importModel:
[7] Assertion failed: _importer_ctx.tensors().at(output.name()).is_tensor()
Status of load: Can't parse model file
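For what it is worth, the node dump above shows the graph output detection_classes:0 being produced directly by a Constant node, which the TensorRT ONNX parser appears to register as a weight rather than a tensor, hence the is_tensor() assertion. A small diagnostic sketch (placeholder path) that lists graph outputs fed by Constant nodes:

import onnx

model = onnx.load("model_sim.onnx")      # placeholder path
graph_outputs = {o.name for o in model.graph.output}

# List graph outputs that are produced directly by Constant nodes -- these are
# the tensors the parser's is_tensor() assertion seems to trip over.
for node in model.graph.node:
    if node.op_type == "Constant":
        for out in node.output:
            if out in graph_outputs:
                print("graph output", out, "comes from Constant node", node.name)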
I also succeeded in simplifying MobileNet normally, without --skip-optimization, by taking a different output node.
In Python the simplified model loads without errors and passes the checker. But the error in C++ is the same as above…
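(By "passes the checker" I mean roughly this, with a placeholder path:)

import onnx

model = onnx.load("model_sim.onnx")      # placeholder path
onnx.checker.check_model(model)          # raises if the model is structurally invalid
print("IR version:", model.ir_version, "opset:", model.opset_import[0].version)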
I also succeeded in converting this model with the different output to .onnx with opset 9 (it did not work before, although opset 9 seems less error-prone than 11). But there is still an error when I try to load it in C++, this time a different one:
ONNX IR version: 0.0.6
Opset version: 9
Producer name: tf2onnx
Producer version: 1.6.0
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[03/11/2020-13:50:11] [E] [TRT] Parameter check failed at: ../builder/Network.cpp::addInput::962, condition: inName != knownInputs->name
ERROR: /mnt/Data2/reps/TensorRT/parsers/onnx/ModelImporter.cpp:493 In function importModel:
[7] Assertion failed: _importer_ctx.tensors().at(output.name()).is_tensor()
ERROR: image_tensor:0:206 In function importInput:
[8] Assertion failed: *tensor = ctx->network()->addInput(input.name().c_str(), trtDtype, trt_dims)
Status of load: Can't parse model file
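The addInput failure above (condition: inName != knownInputs->name) looks like the parser is being handed an input name it has already registered. A small diagnostic sketch (placeholder path) for duplicate graph inputs, or inputs that also appear as initializers:

import onnx
from collections import Counter

model = onnx.load("model_opset9.onnx")   # placeholder path

input_names = [i.name for i in model.graph.input]
init_names = {i.name for i in model.graph.initializer}

# addInput() rejects a name it has already seen, so look for duplicated
# graph inputs and for inputs that are also listed as initializers.
for name, count in Counter(input_names).items():
    if count > 1:
        print("duplicate graph input:", name)
for name in input_names:
    if name in init_names:
        print("graph input also present as initializer:", name)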
It looks like it is close to impossible to load anything from TF into TRT via any of the parsers.
UFF has even less chance of working than ONNX.
It is possible to convert to a CUDA engine in Python, but the CUDA engine depends on the architecture of the particular card and does not work when deployed elsewhere.
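For completeness, the Python path mentioned above looks roughly like this with the TensorRT 7 API (paths and workspace size are placeholder values); the serialize() call at the end is exactly what ties the result to one GPU architecture and TensorRT version:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(EXPLICIT_BATCH) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    builder.max_workspace_size = 1 << 30          # 1 GB, placeholder value
    with open("model.onnx", "rb") as f:           # placeholder path
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")
    engine = builder.build_cuda_engine(network)
    if engine is None:
        raise RuntimeError("engine build failed")
    # The serialized plan is specific to the GPU it was built on and to this
    # TensorRT version, which is the deployment problem described above.
    with open("model.engine", "wb") as f:         # placeholder path
        f.write(engine.serialize())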
Could you please provide details on the platforms you are using so we can better help:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version
Python 3.6.9, but the problem is in C++ (or trtexec, or whatever; it is the same library code).
TF 1.15.2, not using PyTorch.
TensorRT 7.0.0.11, but I also tried building the latest master from GitHub.
Also, at the start I had a problem with the unrecognized type UINT8, but I edited the parser code manually to cast UINT8 to INT8 (not safe, but obviously the problem here is not with that).
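An alternative to patching the parser source would be to patch the model instead and redeclare the uint8 input as float32 before parsing. A sketch with placeholder paths (note that a Cast node consuming the original uint8 input may then become redundant or need its "to" attribute adjusted):

import onnx
from onnx import TensorProto

model = onnx.load("model.onnx")                  # placeholder path

for inp in model.graph.input:
    t = inp.type.tensor_type
    if t.elem_type == TensorProto.UINT8:
        print("rewriting input", inp.name, "from UINT8 to FLOAT")
        t.elem_type = TensorProto.FLOAT          # declare the input as float32 instead

onnx.save(model, "model_float_input.onnx")       # placeholder path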
@SunilJB Hello, any news? Sorry, it seems I uploaded incorrect models (with the wrong output). I updated the drive with a new one (the bigger one), exported with the usual outputs (detection_boxes:0, detection_scores:0, detection_classes:0).
Up.
Any updates? I am sorry, but is there maybe something else I can try? I very much want to load at least something from the OD API without a CUDA engine binary file, because that is not a reasonable solution for deployment. But after a month of trying I still cannot load anything via ONNX or UFF (the latter is hopeless).