TensorRT ONNX error

Description

I am trying to import Faster R-CNN from ONNX to TensorRT, and I received a parsing error in the Resize operation.

Environment

TensorRT Version: 7.0.0.11
GPU Type: GeForce RTX 2070 Super
Nvidia Driver Version: 441.87
CUDA Version: 10.2
CUDNN Version: 7.6.5.32
Operating System + Version: Windows 10
Python Version (if applicable): -
TensorFlow Version (if applicable): -
PyTorch Version (if applicable): -
Baremetal or Container (if container which image + tag): -
ONNX version: 1.5
Opset version: 10

Steps To Reproduce

I am trying to import a Faster R-CNN model from ONNX into TensorRT. When I parse the model, I get the following error while it reads a Resize operation:

UNKNOWN: ModelImporter.cpp:107: Parsing node: 415 [Resize]
UNKNOWN: ModelImporter.cpp:123: Searching for input: 388
UNKNOWN: ModelImporter.cpp:123: Searching for input: 415
UNKNOWN: ModelImporter.cpp:129: 415 [Resize] inputs: [388 -> (-1, 256, -1, -1)], [415 -> (4)],
While parsing node number 413 [Resize -> "416"]:
--- Begin node ---
input: "388"
input: "415"
output: "416"
name: "415"
op_type: "Resize"
attribute {
  name: "mode"
  s: "nearest"
  type: STRING
}

--- End node ---
ERROR: builtin_op_importers.cpp:2412 In function importResize:
[8] Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
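
If I read the assertion right, it is complaining that the second input of the opset-10 Resize node (the scales tensor) is not a constant initializer in the graph. A rough way to check this with the onnx Python package (the model path is a placeholder):

    # List Resize nodes whose `scales` input is not a graph initializer
    import onnx

    model = onnx.load("faster_rcnn.onnx")  # placeholder path for the exported model
    initializer_names = {init.name for init in model.graph.initializer}

    for node in model.graph.node:
        if node.op_type == "Resize" and len(node.input) > 1:
            scales = node.input[1]  # opset 10 Resize inputs: (X, scales)
            if scales not in initializer_names:
                print(node.name, "has a non-constant scales input:", scales)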

The TensorRT import code snippet is:
// 1) Create a network definition, import the model

    // 1.1.- Create the inference builder
    IBuilder* builder = createInferBuilder(gLogger);

    // Create the network definition
    const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    INetworkDefinition* network = builder->createNetworkV2(explicitBatch);

    // 1.2.- Create onnx parser
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);

    // 1.3.- Parse the model directly from file (verbosity 4 == kVERBOSE)
    const int verbosity{4};
    if (!parser->parseFromFile(DEPLOY_FILE, verbosity))
    {
        std::cout << "Failed to parse onnx file." << std::endl;
        return nullptr;
    }

I found similar issues on GitHub, like this one. I am not sure I understood it correctly: is the nearest-neighbor Resize operation not supported?
How could I solve this issue? Would upgrading to TensorRT 7.1 help?

Thank you for your time

Can you try the solution suggested in the link below:

Opset11 model above with TRT7 + OSS components.
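
If you need to regenerate the model, re-exporting from PyTorch with opset 11 is just a matter of passing opset_version=11 to the exporter. A rough sketch (the model object, input shape, and file names are placeholders):

    # Re-export the PyTorch model with ONNX opset 11 (placeholders throughout)
    import torch

    model.eval()
    dummy_input = torch.randn(1, 3, 800, 800)  # placeholder input shape
    torch.onnx.export(
        model,
        dummy_input,
        "faster_rcnn_opset11.onnx",
        opset_version=11,          # per the suggestion above: opset 11 + TRT7 + OSS components
        do_constant_folding=True,  # fold computed constants into initializers where possible
        input_names=["input"],
        output_names=["output"],
    )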

Thanks

Oh sorry, it's true. The answer was right in front of me; my fault.

Thank you

After changing the opset to 11 and troubleshooting with the suggestions from your previous post, I received the following assertion from TensorRT:

While parsing node number 43 [Resize -> "339"]:
--- Begin node ---
input: "310"
input: "330"
input: "338"
input: "337"
output: "339"
name: "Resize_43"
op_type: "Resize"
attribute {
  name: "coordinate_transformation_mode"
  s: "pytorch_half_pixel"
  type: STRING
}
attribute {
  name: "cubic_coeff_a"
  f: -0.75
  type: FLOAT
}
attribute {
  name: "mode"
  s: "linear"
  type: STRING
}
attribute {
  name: "nearest_mode"
  s: "floor"
  type: STRING
}

--- End node ---
ERROR: ModelImporter.cpp:124 In function parseGraph:
[5] Assertion failed: ctx->tensors().count(inputName)

Looking at the model in a graph visualizer, I found 4 Resize layers that show the same issue.

The ONNX model checker did not output any message (I suppose that is good). Reading through the previous GitHub issue, I will try running the mentioned onnx-simplifier and see how it goes.
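
For reference, this is roughly what running the checker and the simplifier would look like, assuming the onnxsim Python API (file names are placeholders):

    # Check the exported model, then run onnx-simplifier on it (placeholder paths)
    import onnx
    from onnxsim import simplify

    model = onnx.load("faster_rcnn_opset11.onnx")
    onnx.checker.check_model(model)         # raises if the model is structurally invalid

    model_simplified, ok = simplify(model)  # folds shape computations into constants
    assert ok, "onnx-simplifier could not validate the simplified model"
    onnx.save(model_simplified, "faster_rcnn_opset11_simplified.onnx")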

I am facing the exact same issue. Any update so far?