ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).

I am trying to import an ONNX model and get this error…

WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
While parsing node number 0 [Conv]:
ERROR: ModelImporter.cpp:296 In function importModel:
[5] Assertion failed: tensors.count(input_name)

I have the latest TensorRT (6.0.x) and the latest ONNX installed…

This model was exported from PyTorch using the ONNX exporter, following this sample: torch.onnx — PyTorch master documentation

Any ideas?


Hello,

I think that I have the same problem.

This is my configuration:
Windows 10
Python - 3.6.8
torch - 1.1.0
torchsummary - 1.5.1
torchvision - 0.3.0
TensorRT - 6.0.1.5
CuDNN - 7.6.3
CUDA - 9.0

I have a segNet CNN implemented in PyTorch, and I converted it to ONNX using these commands:

import torch

dummy_input = torch.randn(1, 32, 400, 400, device='cuda')

input_names = ["Input"]
output_names = ["Output"]

torch.onnx.export(model, dummy_input, "segNet.onnx", verbose=True,
                  input_names=input_names, output_names=output_names)

At first, torch.onnx.export failed due to the unsupported max_unpool2d operator.
So I updated the file symbolic.py inside the onnx directory of the torch package:

def max_unpool2d(g, self, indices, output_size):
    return g.op("max_unpool2d", self, indices, output_size)

After this update, torch.onnx.export started to work without any errors and a segNet.onnx file was successfully generated.
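Two caveats may be worth noting here. First, newer PyTorch releases expose torch.onnx.register_custom_op_symbolic, which lets you register such a symbolic from your own script instead of patching symbolic.py in site-packages (a sketch, assuming a PyTorch version newer than the 1.1.0 used in this thread; the "custom::" domain name is a placeholder). Second, the emitted node is a custom op, so a clean export does not guarantee a clean parse — the TensorRT ONNX parser still needs a matching plugin for it.

```python
import torch
import torch.onnx

# Sketch: register a symbolic for aten::max_unpool2d without editing
# symbolic.py inside the installed torch package.
def max_unpool2d_symbolic(g, self, indices, output_size):
    # "custom::" is a placeholder domain. The emitted node is still a
    # custom op; the TensorRT ONNX parser will reject it unless a
    # matching plugin is registered on the TensorRT side.
    return g.op("custom::max_unpool2d", self, indices, output_size)

torch.onnx.register_custom_op_symbolic(
    "aten::max_unpool2d", max_unpool2d_symbolic, opset_version=9)
```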

But when I ran this call:

auto parsed = m_onnxParser->parseFromFile(
    fileName.string().c_str(), static_cast<int>(nvinfer1::ILogger::Severity::kINFO));

I got the following report:

----------------------------------------------------------------
Input filename:   segNet.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.1
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
While parsing node number 0 [Cast]:
ERROR: builtin_op_importers.cpp:727 In function importCast:
[8] Assertion failed: trt_dtype == nvinfer1::DataType::kHALF && cast_dtype == ::ONNX_NAMESPACE::TensorProto::FLOAT

Any help will be much appreciated

Hi,

Could you please share your script and model file so we can better help?

Thanks

Do you want the ONNX model and the code used to load it in TRT?

Or do you want the PyTorch model and the script used to convert it to TRT?

Hello,

I understand now that my main problem with the TRT ONNX parser is different, so I opened a new topic:
https://devtalk.nvidia.com/default/topic/1068292/tensorrt/custom-layer-plugin-tensorrtc-nvuffparser-iuffparser-vs-tensorrt-c-nvonnxparser-iparser/

Good luck here.

Hi,

Can you provide the following information so we can better help?
Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version

Also, if possible please share the script & model (pytorch and ONNX) file to reproduce the issue.

Thanks

I have the same issue.
Ubuntu 18.04
GeForce GTX 1080 Ti
Driver version: 418.87.00
CUDA version: 10.1
cuDNN: libcudnn.so.7.6.2
Python 3.6
TensorFlow 1.14
TensorRT 6
I converted a DeepLab model to ONNX with tf2onnx using opset 11, but I get this error from the trtexec command:
trtexec --onnx=frozen_inference_graph.onnx

WARNING: ONNX model has a newer ir_version (0.0.6) than this parser was built against (0.0.3).
While parsing node number 0 [Cast]:
ERROR: builtin_op_importers.cpp:727 In function importCast:
[8] Assertion failed: trt_dtype == nvinfer1::DataType::kHALF && cast_dtype == ::ONNX_NAMESPACE::TensorProto::FLOAT
[03/08/2020-09:00:00] [E] Failed to parse onnx file
[03/08/2020-09:00:00] [E] Parsing model failed
[03/08/2020-09:00:00] [E] Engine could not be created
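For what it's worth, the ir_version warning itself is often benign; the hard failure here is the Cast importer, which in TensorRT 6 only accepts float-to-half casts. One thing to try is re-running the tf2onnx conversion with a lower opset, which sometimes avoids the problematic Cast nodes. A sketch only — the INPUT_NAME:0 / OUTPUT_NAME:0 tensor names are placeholders that depend on your frozen graph, and whether opset 9 exports your model at all depends on the ops it uses:

```shell
# Hedged sketch: re-convert the frozen graph with a lower opset so the
# exported model stays within what the TensorRT 6 ONNX parser supports.
# INPUT_NAME:0 / OUTPUT_NAME:0 are placeholders for your graph's tensors.
python -m tf2onnx.convert \
    --graphdef frozen_inference_graph.pb \
    --inputs INPUT_NAME:0 \
    --outputs OUTPUT_NAME:0 \
    --opset 9 \
    --output frozen_inference_graph.onnx
```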