TensorRT ONNX parser - Parsing a MaxPool layer with two outputs raises a std::out_of_range exception

Hello,

This is my configuration:
Linux distro and version - Linux-x86_64, Ubuntu 16.04
Python - 3.5.2
VS2017 x64 - Cross platform extension
torch-1.1.0.dist-info
torchsummary-1.5.1.dist-info
torchvision-0.3.0.dist-info
TensorFlow Python/C++ (TF) - 1.9 (C++ version was built from source)
TensorRT C++ (TRT) - 7.0.0.11
TensorRT Open Source https://github.com/NVIDIA/TensorRT - master
GPU type - GeForce GTX 1080
NVIDIA driver version - 418.40.04
CUDA version - Release 9.0, V9.0.252

I have a SegNet model which was implemented and trained via PyTorch, and is intended to be run via TRT C++.

When I call this function:

auto parsed = m_onnxParser->parseFromFile(
       fileName.string().c_str(), static_cast<int>(nvinfer1::ILogger::Severity::kINFO));
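
For context, the call above sits inside a fairly standard setup; a minimal sketch, where m_logger and m_onnxParser are members of my class and the rest is the usual TensorRT 7 explicit-batch API:

auto builder = nvinfer1::createInferBuilder(m_logger);
const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
auto network = builder->createNetworkV2(explicitBatch);
m_onnxParser = nvonnxparser::createParser(*network, m_logger);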

I’m getting the following report:

----------------------------------------------------------------
Input filename:   ../../../../Data/SegNet/Semantic_Segmentation/segNet.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.1
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
......
some verbose information
......
ModelImporter.cpp:107: Parsing node:  [MaxPool]
VERBOSE:
ModelImporter.cpp:123: Searching for input: 188
VERBOSE:
ModelImporter.cpp:129:  [MaxPool] inputs: [188 -> (1, 64, 400, 400)],
VERBOSE:
ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 12) [Pooling] for ONNX node:
VERBOSE:
ImporterContext.hpp:97: Registering tensor: 195 for ONNX tensor: 195
Exception - invalid vector<T> subscript!!!

I used the TensorRT OSS package to investigate the source of the problem.
I built its master branch from source on my Linux station successfully and executed the onnx2trt application.

I found that the exception is raised at:
File - ModelImporter.cpp
Function - parseGraph
Line number - 172
Line -

auto& output = outputs.at(i);

I did some further debugging and realized that this line:

GET_VALUE(importFunc(ctx, node, nodeInputs), &outputs);

fills the outputs vector with only one element,
while this line:

node.output().size()

returns 2, which is actually correct.
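
Putting these together, the failing pattern in parseGraph looks roughly like this (my paraphrase of the surrounding loop, not the exact code):

std::vector<TensorOrWeights> outputs;
GET_VALUE(importFunc(ctx, node, nodeInputs), &outputs); // fills only 1 element
for (int i = 0; i < node.output().size(); ++i)          // iterates 2 times here
{
    auto& output = outputs.at(i);  // throws std::out_of_range when i == 1
    // ... binds output to the ONNX tensor name ...
}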

And this is the root cause of the exception.
Why does importFunc return a vector with only one element?

Please advise.

Regards,

Hi orong13,

If you trace through the code a bit, I think the fact that the parser is aware there are 2 outputs but only returns 1 comes from an assumption that pooling layers only have 1 output.

I think you can see that in the code for poolingHelper here, which gets called for the MaxPool op: https://github.com/onnx/onnx-tensorrt/blob/84b5be1d6fc03564f2c0dba85a2ee75bad242c2e/onnx2trt_utils.cpp#L1140

You might be able to tweak those couple of lines and rebuild the OSS components to get a multi-output tensor out of it, though I haven’t tried this before.

Hello,
I traced the poolingHelper code, and based on my understanding the root cause of the problem is that nvinfer1::IPoolingLayer doesn’t support two outputs at all.

The logic of this code assumes that there is only one output (Line#1140):

tensorPtr = poolingLayer->getOutput(0);

And there is no second output at all.

If I try to access the second output, I get this:

poolingLayer->getOutput(0)
0x24833e60
poolingLayer->getOutput(1)
0x0

While the input node actually holds all the required information about the two outputs.

So, based on my analysis, I don’t think it is possible to change the parser code as long as a mandatory class such as nvinfer1::IPoolingLayer doesn’t support two outputs.
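
This can also be confirmed directly against the TensorRT API; a minimal check (using the standard addPoolingNd/getNbOutputs/getOutput calls, given any network and input tensor):

auto* pool = network->addPoolingNd(*input, nvinfer1::PoolingType::kMAX, nvinfer1::Dims2{2, 2});
std::cout << pool->getNbOutputs() << std::endl; // prints 1 - only the pooled tensor exists
std::cout << pool->getOutput(1) << std::endl;   // prints 0 (nullptr) - no Indices output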

Based on the following link:
https://github.com/onnx/onnx/blob/master/docs/Operators.md#MaxPool
Under the MaxPool operator, it is written that a second, optional output (the Indices tensor) shall be supported.
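
To see which nodes are affected, the model can be scanned with the ONNX protobuf C++ bindings. A minimal sketch (assuming onnx/onnx_pb.h from the ONNX distribution is on the include path; segNet.onnx is my model file):

#include <fstream>
#include <iostream>
#include "onnx/onnx_pb.h"

int main()
{
    onnx::ModelProto model;
    std::ifstream file("segNet.onnx", std::ios::binary);
    model.ParseFromIstream(&file);

    // List every MaxPool node that declares the optional second (Indices)
    // output - exactly the nodes the parser currently cannot import.
    for (const auto& node : model.graph().node())
    {
        if (node.op_type() == "MaxPool" && node.output().size() > 1)
        {
            std::cout << "Two-output MaxPool: " << node.name() << std::endl;
        }
    }
    return 0;
}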

In conclusion, nvinfer1::IPoolingLayer should be updated to support this optional output as well.

Please correct me if I’m wrong, or confirm my analysis.

Regards,

Hi orong13,

Sorry for the delay. Unfortunately TensorRT doesn’t support the 2-output variant of pooling layers at the moment, so that’s why it’s written that way in the ONNX-TensorRT parser.

Thanks NVES_R for your clarification!

Based on your information about the ONNX-TRT parser status:

  1. Is it possible to check/verify whether this capability will be part of the next releases?
  2. Until the next release is published, based on my past TRT experience, I can only move forward with my SegNet model if I am able to replace the original pooling layer with a plugin. Please correct me if I’m wrong. Are there any cases in which a plugin cannot help at all? (See the registration sketch after this list.)
  3. If a plugin can help, I have an additional topic that I opened about a problem registering a plugin with the ONNX-TRT parser: https://devtalk.nvidia.com/default/topic/1068292/tensorrt/custom-layer-plugin-tensorrtc-nvuffparser-iuffparser-vs-tensorrt-c-nvonnxparser-iparser/ This topic is still open....

     The punch line is: can NVIDIA provide an end-to-end, full example of an ONNX TRT plugin, not just the plugin itself but also the application level which activates the ONNX parser and registers the plugin?
     (As is provided for the Uff and Caffe parsers…)
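
For reference, this is the registration pattern I would expect to work; a minimal sketch, where MaxUnPoolPluginCreator stands for my hypothetical nvinfer1::IPluginCreator implementation and the registration calls themselves are standard TensorRT API:

#include "NvInfer.h"
#include "NvInferPlugin.h"

// Option A: static registration at namespace scope, before the parser runs.
REGISTER_TENSORRT_PLUGIN(MaxUnPoolPluginCreator);

// Option B: explicit registration before creating the parser.
void registerMyPlugins(nvinfer1::ILogger& logger)
{
    initLibNvInferPlugins(&logger, ""); // registers NVIDIA's built-in plugins
    static MaxUnPoolPluginCreator creator;
    getPluginRegistry()->registerCreator(creator, "");
}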

Thanks,

Hi,

  1. I don’t think there is enough demand at the moment; there is a large list of feature requests.
  2. Yes, I think for now you’d need to use a plugin.
  3. We’re working on this. I’m pushing for a good end2end sample since ONNX is now the best supported parser.

Thanks NVES_R for your detailed response!

I will wait for NVIDIA’s end-to-end sample of the TRT ONNX plugin.

In the meantime, I just want to share that I saw, in the TRT ONNX parser’s verbose output, several prints about ONNX plugin registration, for example:
VERBOSE:
Plugin creator registration succeeded - ONNXTRT_NAMESPACE::GridAnchor_TRT
VERBOSE:
Plugin creator registration succeeded - ONNXTRT_NAMESPACE::NMS_TRT
VERBOSE:
Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Reorg_TRT

And there are more…

Maybe NVIDIA can share these plugins’ code as end-to-end examples?

I can also share my SegNet model material, which can help you help me understand why I cannot succeed in registering a plugin with the TRT ONNX parser…

Thanks,

Hello,
I tried to check this issue with TensorRT version 7.0.0.11 and the results were the same:

  • The ONNX parser raises an exception on the MaxPool node due to its two outputs

  • A plugin still isn’t activated by the nvonnxparser::IParser parseFromFile API, so my MaxPool and MaxUnPool plugins cannot be integrated into the model

I also tried to edit all the MaxPool and MaxUnPool nodes using the onnx-graphsurgeon 0.2.0 tool: I changed their names and operator types to “MaxPoolName” and “MaxUnPoolName” and then tried to declare my plugins with the same names, but they still weren’t registered.

In contrast, with the Uff parser all of this logic works perfectly.

I can provide any required material that will help to analyze the problem.

Please advise.