[TensorRT] ERROR: Network must have at least one output

Description

I am trying to optimise my Mask-RCNN model using the ONNX parser. For this, I converted my model from h5 to pb (frozen graph): resnet50-coco-epoch1.pb

Next, I converted this pb file to ONNX model: resnet50-coco-epoch1.onnx.

During this step, I had to explicitly specify the input layer dims, but not the output dims:

$ python -m tf2onnx.convert --input '/content/resnet50-coco-epoch1.pb' --inputs input_image:0[2,480,480,3] --outputs mrcnn_class/Reshape:0 --output resnet50-imagenet-epoch5.onnx --opset 12

Even though I got the model, I noticed that its outputs still have unknown dimensions.
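
To check, I printed the declared output shapes with a quick script (a sketch using the onnx package; unknown dims print as "?"):

import onnx

model = onnx.load("resnet50-coco-epoch1.onnx")
# A dim that was never set has dim_value == 0 and an empty dim_param
for out in model.graph.output:
    dims = [d.dim_value or d.dim_param or "?" for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)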

Next, I try to generate a TensorRT engine using:

from onnx import ModelProto
import engine as eng  # local helper module (engine.py from NVIDIA's TF-to-ONNX-to-TRT tutorial)

onnx_path = "resnet50-coco-epoch1.onnx"
engine_name = "jenkem.plan"
batch_size = 1

model = ModelProto()
with open(onnx_path, "rb") as f:
    model.ParseFromString(f.read())

# Read H, W, C from the model's input; dim 0 is the batch dimension
d0 = model.graph.input[0].type.tensor_type.shape.dim[1].dim_value
d1 = model.graph.input[0].type.tensor_type.shape.dim[2].dim_value
d2 = model.graph.input[0].type.tensor_type.shape.dim[3].dim_value
shape = [batch_size, d0, d1, d2]

engine = eng.build_engine(onnx_path, shape=shape)
eng.save_engine(engine, engine_name)

But I get the error:

[TensorRT] ERROR: Parameter check failed at: ../builder/Network.cpp::addInput::671, condition: isValidDims(dims, hasImplicitBatchDimension())
[TensorRT] ERROR: Network must have at least one output

Going through past discussions, I noticed that I had to mark the network output manually. On doing so,

network.mark_output(network.get_layer(network.num_layers - 1).get_output(0))

I get the error:

[TensorRT] ERROR: Parameter check failed at: ../builder/Network.cpp::addInput::671, condition: isValidDims(dims, hasImplicitBatchDimension())
False
python3: ../builder/Network.cpp:863: virtual nvinfer1::ILayer* nvinfer1::Network::getLayer(int) const: Assertion `layerIndex >= 0' failed.
Aborted
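
For context, eng is a small helper module along the lines of NVIDIA's TF-to-ONNX-to-TensorRT tutorial. A minimal sketch of that recipe (paraphrased, not my exact helper; EXPLICIT_BATCH is the flag the ONNX parser expects in TensorRT >= 6):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, shape):
    # The ONNX parser requires an explicit-batch network in TensorRT >= 6
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 256 << 20  # 256 MiB
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                # On a failed parse the network is empty, so num_layers - 1
                # is -1, which is exactly what the getLayer assertion above
                # complains about; print the parser errors instead
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        network.get_input(0).shape = shape
        return builder.build_cuda_engine(network)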

Any help is highly appreciated 🤠.

Environment

TensorRT Version: 6.0.1.10
GPU Type: NVIDIA Jetson AGX Xavier (integrated GPU)
L4T info: # R32 (release), REVISION: 3.1, GCID: 18186506, BOARD: t186ref, EABI: aarch64, DATE: Tue Dec 10 07:03:07 UTC 2019
Nvidia Driver Version:
CUDA Version: 10.0
CUDNN Version: 7.6.3
Operating System + Version: Ubuntu 18.04.5 LTS
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15-gpu
Baremetal or Container (if container which image + tag): Baremetal

Steps To Reproduce

Obtain the ONNX model using the Google Colaboratory notebook HERE

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the below snippet:

check_model.py

import sys
import onnx
filename = yourONNXmodel
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, request you to share the trtexec "--verbose" log for further debugging.
Thanks!

Thanks.

For the first set of commands, I got no errors:

>>> import sys
>>> import onnx
>>> engine_name = 'jenkem.plan'
>>> onnx_path = "resnet50-coco-epoch1.onnx"
>>> model = onnx.load(onnx_path)
>>> onnx.checker.check_model(model)
>>> 

The logs of trtexec are as follows:

$ ./trtexec --onnx=/mnt/nvme0n1p1/resnet50/onnx-engine/resnet50-coco-epoch1.onnx --verbose
&&&& RUNNING TensorRT.trtexec # ./trtexec --onnx=/mnt/nvme0n1p1/resnet50/onnx-engine/resnet50-coco-epoch1.onnx --verbose
[07/18/2021-09:16:25] [I] === Model Options ===
[07/18/2021-09:16:25] [I] Format: ONNX
[07/18/2021-09:16:25] [I] Model: /mnt/nvme0n1p1/resnet50/onnx-engine/resnet50-coco-epoch1.onnx
[07/18/2021-09:16:25] [I] Output:
[07/18/2021-09:16:25] [I] === Build Options ===
[07/18/2021-09:16:25] [I] Max batch: 1
[07/18/2021-09:16:25] [I] Workspace: 16 MB
[07/18/2021-09:16:25] [I] minTiming: 1
[07/18/2021-09:16:25] [I] avgTiming: 8
[07/18/2021-09:16:25] [I] Precision: FP32
[07/18/2021-09:16:25] [I] Calibration: 
[07/18/2021-09:16:25] [I] Safe mode: Disabled
[07/18/2021-09:16:25] [I] Save engine: 
[07/18/2021-09:16:25] [I] Load engine: 
[07/18/2021-09:16:25] [I] Inputs format: fp32:CHW
[07/18/2021-09:16:25] [I] Outputs format: fp32:CHW
[07/18/2021-09:16:25] [I] Input build shapes: model
[07/18/2021-09:16:25] [I] === System Options ===
[07/18/2021-09:16:25] [I] Device: 0
[07/18/2021-09:16:25] [I] DLACore: 
[07/18/2021-09:16:25] [I] Plugins:
[07/18/2021-09:16:25] [I] === Inference Options ===
[07/18/2021-09:16:25] [I] Batch: 1
[07/18/2021-09:16:25] [I] Iterations: 10 (200 ms warm up)
[07/18/2021-09:16:25] [I] Duration: 10s
[07/18/2021-09:16:25] [I] Sleep time: 0ms
[07/18/2021-09:16:25] [I] Streams: 1
[07/18/2021-09:16:25] [I] Spin-wait: Disabled
[07/18/2021-09:16:25] [I] Multithreading: Enabled
[07/18/2021-09:16:25] [I] CUDA Graph: Disabled
[07/18/2021-09:16:25] [I] Skip inference: Disabled
[07/18/2021-09:16:25] [I] Input inference shapes: model
[07/18/2021-09:16:25] [I] === Reporting Options ===
[07/18/2021-09:16:25] [I] Verbose: Enabled
[07/18/2021-09:16:25] [I] Averages: 10 inferences
[07/18/2021-09:16:25] [I] Percentile: 99
[07/18/2021-09:16:25] [I] Dump output: Disabled
[07/18/2021-09:16:25] [I] Profile: Disabled
[07/18/2021-09:16:25] [I] Export timing to JSON file: 
[07/18/2021-09:16:25] [I] Export profile to JSON file: 
[07/18/2021-09:16:25] [I] 
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - GridAnchorRect_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - NMS_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - Reorg_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - Region_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - Clip_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - LReLU_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - PriorBox_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - Normalize_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - RPROI_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[07/18/2021-09:16:25] [V] [TRT] Plugin Creator registration succeeded - FlattenConcat_TRT
----------------------------------------------------------------
Input filename:   /mnt/nvme0n1p1/resnet50/onnx-engine/resnet50-coco-epoch1.onnx
ONNX IR version:  0.0.7
Opset version:    12
Producer name:    tf2onnx
Producer version: 1.10.0
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.7) than this parser was built against (0.0.3).
[07/18/2021-09:16:27] [E] [TRT] Parameter check failed at: ../builder/Network.cpp::addInput::671, condition: isValidDims(dims, hasImplicitBatchDimension())
ERROR: ModelImporter.cpp:80 In function importInput:
[8] Assertion failed: *tensor = importer_ctx->network()->addInput( input.name().c_str(), trt_dtype, trt_dims)
[07/18/2021-09:16:27] [E] Failed to parse onnx file
[07/18/2021-09:16:27] [E] Parsing model failed
[07/18/2021-09:16:27] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/mnt/nvme0n1p1/resnet50/onnx-engine/resnet50-coco-epoch1.onnx --verbose

Even providing inputs as

./trtexec --onnx=/mnt/nvme0n1p1/resnet50/onnx-engine/resnet50-coco-epoch1.onnx --shapes=input:2x3x480x480 --verbose

leads to the same error. The model is provided in the question above.

Hi @pradan ,
Can you please try upgrading to the latest TRT version and trying again?

Thanks!

That would mean flashing the Jetson with the latest JetPack, right?

Hi, pradan.

I have a question. You set your input dimensions as [2,480,480,3]; what does the 2 mean? The batch size? Or is it a four-dimensional input? If a batch size were added on top, it would become a five-dimensional shape like [batchsize,2,480,480,3].

Try updating your TRT version to TensorRT 8.0.

It's [N,H,W,C] … N is the batch size.

I think updating TRT to 8.0 won't recover my missing input and output layer shapes.

@pradan

Let's try another way first. In the tf2onnx docs, I see that three TensorFlow model formats are supported.

Try converting your pb file to the saved-model or checkpoint format, and try to specify the input/output dimensions in TensorFlow (actually I am not familiar with TF, so I don't know whether this can be done; a rough sketch follows the format list below).

And upload your new-format model here; I will also try it.

#### --saved-model

TensorFlow model as saved_model. We expect the path to the saved_model directory.

#### --checkpoint

TensorFlow model as checkpoint. We expect the path to the .meta file.

#### --input or --graphdef

TensorFlow model as graphdef file.
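
From a quick look at the TF docs, the pb-to-SavedModel step might go like this in TF 1.x (untested sketch; the tensor names come from your tf2onnx command above):

import tensorflow as tf  # TF 1.15

# Load the frozen graph
with tf.io.gfile.GFile("resnet50-coco-epoch1.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh session and re-export it as a SavedModel
with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name="")
    inp = sess.graph.get_tensor_by_name("input_image:0")
    out = sess.graph.get_tensor_by_name("mrcnn_class/Reshape:0")
    tf.compat.v1.saved_model.simple_save(
        sess, "./saved_model",
        inputs={"input_image": inp}, outputs={"mrcnn_class": out})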

The problem is, the Mask-RCNN model cannot be loaded like any other Keras model:

pretrained_model = tf.keras.applications.ResNet50()

Had this been possible, I would have been able to easily convert it to the SavedModel format.

@pradan
In another post, you said the model you use is TF1…?

How about converting your TF1 model?

Why change to Keras now?

Mask R-CNN actually uses both Keras and TF1. It creates and saves the model with Keras. To save this Keras model, the method described in the official docs is:

Model.save(
    filepath,
    overwrite=True,
    include_optimizer=True,
    save_format=None,
    signatures=None,
    options=None,
    save_traces=True,
)

where the argument

save_format: Either ‘tf’ or ‘h5’, indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to ‘tf’ in TF 2.X, and ‘h5’ in TF 1.X.

So the default saving method in TF1 provides neither the frozen-graph nor the SavedModel format. That's why I had to explicitly modify the training file to obtain the frozen graph.
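
Roughly, the modification amounts to freezing the Keras session graph. A sketch of the idea (assuming the model has already been built in the current session; the output node name is the one passed to tf2onnx above):

import tensorflow as tf  # TF 1.15
from tensorflow.python.framework import graph_util

# Grab the live Keras session and fold its variables into constants
sess = tf.keras.backend.get_session()
frozen_graph = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, output_node_names=["mrcnn_class/Reshape"])

with tf.io.gfile.GFile("resnet50-coco-epoch1.pb", "wb") as f:
    f.write(frozen_graph.SerializeToString())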

I updated TensorRT to 8.0 by reflashing my Jetson and ran the test again:

root@virus-desktop:/usr/src/tensorrt/bin# ./trtexec --onnx=/home/virus/opt-models/res101-holygrail-ep26.onnx --verbose

Cropped Logs:

root@virus-desktop:/usr/src/tensorrt/bin# ./trtexec --onnx=/home/virus/opt-models/res101-holygrail-ep26.onnx --verbose > /home/virus/opt-models/logs1.txt
[W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[E] [TRT] ModelImporter.cpp:720: While parsing node number 595 [TopK -> "ROI/top_anchors:0"]:
[E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[E] [TRT] ModelImporter.cpp:722: input: "ROI/strided_slice__165:0"
input: "Unsqueeze__180:0"
output: "ROI/top_anchors:0"
output: "ROI/top_anchors:1"
name: "ROI/top_anchors"
op_type: "TopK"
attribute {
  name: "sorted"
  i: 1
  type: INT
}

[E] [TRT] ModelImporter.cpp:723: --- End node ---
[E] [TRT] ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:4292 In function importTopK:
[8] Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."
[E] Failed to parse onnx file
[E] Parsing model failed
[E] Engine creation failed
[E] Engine set up failed

The entire logs can be found here: logs1.txt (576.7 KB)

The model looks better than last time, with all layers having clearly defined metadata. The model can be found here: LINK.

Thanks for any help.

Hi @pradan,

Thank you for sharing the model. We could reproduce the same error using TensorRT 8.0.1
Please allow us sometime to look into this.
Looks like you’re using TensorFlow model. Following link may be helpful to you.
https://github.com/pskiran1/TensorRT-support-for-Tensorflow-2-Object-Detection-Models

Thank you.

Thanks for the resource, but the repo uses the SavedModel format to obtain the ONNX model, which is not possible in our case, as we use TF1.

Hi,

Could you please try Polygraphy — Polygraphy 0.38.0 documentation, for better debugging.

Thank you.

I tried, but it seems like another piece of hard work. The DataLoader part is not well documented for image inputs. I would appreciate any alternative for that, if possible. Thanks.
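
From what I can tell, a custom loader boils down to a small script defining a load_data() generator that yields feed dicts. A sketch (the input name and shape come from my tf2onnx command above; random data stands in for real images):

# data_loader.py
# Used in recent Polygraphy versions via, e.g.:
#   polygraphy run model.onnx --trt --data-loader-script data_loader.py
import numpy as np

def load_data():
    for _ in range(5):
        # One feed dict per iteration: {input tensor name: numpy array}
        yield {"input_image:0": np.random.rand(2, 480, 480, 3).astype(np.float32)}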

Hi,

Looks like the error is:

(inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."
i.e., the second input to TopK needs to be a constant.

Please try running constant folding with Polygraphy as seen in this example: TensorRT/tools/Polygraphy/examples/cli/surgeon/02_folding_constants at master · NVIDIA/TensorRT · GitHub
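
Concretely, that example reduces to a single command (file names here are placeholders):

polygraphy surgeon sanitize model.onnx --fold-constants -o folded.onnx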

Thank you.

Hi @spolisetty, thanks for the efforts. I tried folding these layers/nodes and successfully obtained a folded ONNX model, but I still get the same error:

[09/25/2021-22:41:38] [W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/25/2021-22:41:38] [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[09/25/2021-22:41:38] [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[09/25/2021-22:41:39] [E] [TRT] ModelImporter.cpp:720: While parsing node number 556 [TopK -> "ROI/top_anchors:0"]:
[09/25/2021-22:41:39] [E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[09/25/2021-22:41:39] [E] [TRT] ModelImporter.cpp:722: input: "ROI/strided_slice__165:0"
input: "Unsqueeze__180:0"
output: "ROI/top_anchors:0"
output: "ROI/top_anchors:1"
name: "ROI/top_anchors"
op_type: "TopK"
attribute {
  name: "sorted"
  i: 1
  type: INT
}

[09/25/2021-22:41:39] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[09/25/2021-22:41:39] [E] [TRT] ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:4292 In function importTopK:
[8] Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."
[09/25/2021-22:41:39] [E] Failed to parse onnx file
[09/25/2021-22:41:39] [I] Finish parsing network model
[09/25/2021-22:41:39] [E] Parsing model failed
[09/25/2021-22:41:39] [E] Engine creation failed
[09/25/2021-22:41:39] [E] Engine set up failed

I obtained my original ONNX model by converting my pb graph.

frozen graph (.pb) → ONNX model

I observed the fault-prone layer ROI/top_anchors in both these models:

  • The folded model: (screenshot attached)

Is it possible that the method used to obtain the ONNX model from the frozen graph failed to retain the name of K (name: ROI/Minimum) and wrongly converted it to name: Unsqueeze__180:0?
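
If it really is just the K input that lost its link to a constant, maybe it can be patched directly with onnx-graphsurgeon. A sketch (the node name and the int64 dtype are taken from the error log; the value 6000 is a placeholder that must match whatever ROI/Minimum evaluates to):

import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("res101-holygrail-ep26.onnx"))

# Replace the dynamic K input of the failing TopK with a constant initializer
for node in graph.nodes:
    if node.op == "TopK" and node.name == "ROI/top_anchors":
        node.inputs[1] = gs.Constant("K", np.array([6000], dtype=np.int64))

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "res101-holygrail-ep26-topk-const.onnx")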