ONNX -> TensorRT convertAxis assertion failed

Hello. I’m trying to convert the default Faster R-CNN Inception model from the TF Object Detection API to TensorRT.

I successfully converted the model to ONNX format and am now trying to convert it to TRT using trtexec or the C++ parser code, but I’m facing this error:

While parsing node number 265 [Concat -> "GridAnchorGenerator/Meshgrid_1/ExpandedShape/concat:0"]:
--- Begin node ---
input: "GridAnchorGenerator/Meshgrid_1/ExpandedShape/Slice:0"
input: "const_fold_opt__2097"
input: "GridAnchorGenerator/Meshgrid_1/ExpandedShape/Slice_1:0"
output: "GridAnchorGenerator/Meshgrid_1/ExpandedShape/concat:0"
name: "GridAnchorGenerator/Meshgrid_1/ExpandedShape/concat"
op_type: "Concat"
attribute {
  name: "axis"
  i: 0
  type: INT
}

--- End node ---
ERROR: /mnt/Data2/reps/TensorRT/parsers/onnx/onnx2trt_utils.cpp:203 In function convertAxis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[03/03/2020-18:00:41] [E] Failed to parse onnx file

Is there any way to fix this, or a workaround?
TensorRT and onnx-tensorrt are the latest versions, built from master.
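For context, the assertion comes from the parser's axis normalization: ONNX allows negative axes, which the parser shifts into range before asserting `axis >= 0 && axis < nbDims`. A minimal Python sketch of that logic (an illustration, not the actual C++ in onnx2trt_utils.cpp):

```python
def convert_axis(axis: int, nb_dims: int) -> int:
    """Illustrative sketch of the parser's axis normalization (not the real C++)."""
    # ONNX permits negative axes counted from the end; shift them into range.
    if axis < 0:
        axis += nb_dims
    # Mirror of the failing check: axis >= 0 && axis < nbDims.
    if not (0 <= axis < nb_dims):
        raise ValueError(f"axis {axis} out of range for {nb_dims} dims")
    return axis

print(convert_axis(-1, 3))  # → 2
print(convert_axis(0, 4))   # → 0
```

Note that with the Concat's axis of 0, this check can only fail when nbDims is 0, i.e. the parser sees the input as a scalar; shape-arithmetic subgraphs like Meshgrid/ExpandedShape are a plausible source of such scalar tensors.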


Can you try a few things:

  1. Check the ONNX model using the checker function and see if it passes:

import onnx
model = onnx.load("model.onnx")
onnx.checker.check_model(model)

  2. If (1) passes, try running onnx-simplifier on it.

  3. If (2) doesn’t work, see if anything looks off in Netron when viewing the failing nodes.

Please refer to the link below, in case it helps:


Thank you for the answer.

  1. Yes, I tried, no errors:

>>> model = onnx.load('model.onnx')
>>> onnx.checker.check_model(model)

  2. I get this error with the simplifier:

I managed to use onnx-simplifier with the --skip-optimization flag.
But for the RCNN model it still fails, this time because of the enormous model size:

ValueError: Message ONNX_REL_1_6.ModelProto exceeds maximum protobuf size of 2GB: 5451307912
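That failure is protobuf's hard per-message size limit, not an ONNX bug: a single serialized message cannot exceed 2 GiB, and the size reported above is well past it. A trivial check of the arithmetic:

```python
# Protobuf enforces a hard limit of 2**31 - 1 bytes (~2 GiB) per serialized message.
PROTOBUF_MAX_BYTES = 2**31 - 1
reported_size = 5_451_307_912  # size from the error message above

print(reported_size > PROTOBUF_MAX_BYTES)  # → True: cannot fit in one ModelProto
```

Recent onnx releases can sidestep this by storing weights outside the protobuf (the `save_as_external_data` option of `onnx.save_model`), but the ONNX 1.6 release shown in the logs here predates that option.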

I tried MobileNet from the TF OD API, and the simplifier with --skip-optimization worked. But it still doesn’t load in C++:

--- Begin node ---
output: "detection_classes:0"
name: "node_detection_classes:0"
op_type: "Constant"
attribute {
  name: "value"
  t {
    dims: 1
    dims: 50
    data_type: 1
    name: "detection_classes:0"
    raw_data: "..."
  }
  type: TENSOR
}

--- End node ---
ERROR: /mnt/Data2/reps/TensorRT/parsers/onnx/ModelImporter.cpp:493 In function importModel:
[7] Assertion failed: _importer_ctx.tensors().at(output.name()).is_tensor()
Status of load: Can't parse model file

I also managed to simplify MobileNet normally, without --skip-optimization, by taking a different output node.
In Python the simplified model loads without errors and passes the check. But the error is the same as above…

I also managed to convert this model, with a different output, to .onnx with opset 9 (before it did not work; opset 9 seems less error-prone than 11). But there is still an error when I try to load it in C++, this time a different one:

ONNX IR version:  0.0.6
Opset version:    9
Producer name:    tf2onnx
Producer version: 1.6.0
Model version:    0
Doc string:       
[03/11/2020-13:50:11] [E] [TRT] Parameter check failed at: ../builder/Network.cpp::addInput::962, condition: inName != knownInputs->name
ERROR: /mnt/Data2/reps/TensorRT/parsers/onnx/ModelImporter.cpp:493 In function importModel:
[7] Assertion failed: _importer_ctx.tensors().at(output.name()).is_tensor()
ERROR: image_tensor:0:206 In function importInput:
[8] Assertion failed: *tensor = ctx->network()->addInput(input.name().c_str(), trtDtype, trt_dims)
Status of load: Can't parse model file

It looks like it’s close to impossible to load anything from TF into TRT via either parser.
UFF has even less of a chance than ONNX.

It is possible to convert to a CUDA engine in Python, but the CUDA engine depends on the architecture of the particular card and doesn’t work when deployed.


Could you please share the model file so we can help better?
Meanwhile, you can try the trtexec command-line tool in --verbose mode.



Sure, thanks.

Here are two models: https://drive.google.com/drive/folders/1fcxDDo0rcfBvdA-xZqXyZQe6PtMlXvgR?usp=sharing
One is after export to ONNX and one is after the simplifier.

Here are the verbose outputs for the regular and simplified models:

Thanks for sharing the model files; we will look into it and update you.

Could you please provide details on the platforms you are using so we can better help:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version


Of course:

  1. Ubuntu 18.04.4 LTS - Linux 5.3.0-40-generic
  2. 1060 Ti
  3. nvidia-driver-440
  4. Cuda 10.2
  5. Cudnn 7.6.5
  6. Python 3.6.9, but the problem is in C++ (trtexec or otherwise, it’s the same library code)
  7. TF 1.15.2; not using PyTorch
  8. TensorRT (also tried building the latest master from GitHub)

Also, at the start I had a problem with the unrecognized type UINT8, but I edited the code manually to cast UINT8 to INT8 (not safe, but the problem here is obviously not with that).

@SunilJB Hello, any news? Sorry, it seems I uploaded incorrect models (with the wrong output). I’ve updated the drive with a new one (a bigger one) exported with the usual outputs (detection_boxes:0, detection_scores:0, detection_classes:0).

Hi @WildChlamydia,
Sorry for the delay.
We are looking into it. As soon as we have any updates, we will share them on the forum.


Thank you very much for your work

Also, if I export without any tricks (just take the network, run tf2onnx, and try to load it in trtexec), I get:

ERROR: TensorRT/parsers/onnx/ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: NonZero

I think this is the easiest error to attack? But how can I avoid the op if it is not implemented?
Here is model with NonZero error: https://drive.google.com/file/d/1ZEj2iUTOjJHVcgu8wLFdGl0O3-ctFP6m/view?usp=sharing

P.S. Can you rename the topic to something like “Load TF OD API models via ONNX into TensorRT”?

Any updates? I’m sorry, but maybe there is something else I can try? I very much want to load at least something from OD without a CUDA engine binary file, because that is not a normal solution for deployment. But after a month of trying I still cannot load anything via ONNX or UFF (the latter is hopeless).

Bump, still no progress. Many other developers have faced the same issue in this thread: https://github.com/onnx/onnx-tensorrt/issues/401

The URL to the ONNX model died; here is a new one: https://drive.google.com/open?id=1Iz9jcemvZdrtzbZlzRht1uARrroP5ulP


It looks like the model inputs are uint8_t, which is unsupported.

TensorRT 7.0 does not support the NonZero op. Please refer to the supported-ops link below: