ONNX and TensorRT: ERROR: Network must have at least one output

Hi,
I exported a model from Caffe to ONNX and tried to load it in TensorRT, but I received the following error: ERROR: Network must have at least one output.
I converted the Caffe model to an mlmodel with coremltools, then converted the mlmodel to an ONNX model with onnxmltools:

#!/usr/bin/python
import coremltools
import onnxmltools
#import onnx
# Update your input name and path for your caffe model
proto_file = 'mnist.prototxt'
input_caffe_path = 'mnist.caffemodel'
binary_proto = 'mnist_mean.binaryproto'
# Update the output name and path for intermediate coreml model, or leave as is
output_coreml_model = 'mnist.mlmodel'
# Change this path to the output name and path for the onnx model
output_onnx_model = 'mnist1.onnx'

# Convert Caffe model to CoreML 
coreml_model = coremltools.converters.caffe.convert((input_caffe_path, proto_file))
#coreml_model = coremltools.converters.caffe.convert((input_caffe_path, proto_file, binary_proto), image_input_names="data")
# Save CoreML model
coreml_model.save(output_coreml_model)
# Load a Core ML model
coreml_model = coremltools.utils.load_spec(output_coreml_model)
# Convert the Core ML model into ONNX
onnx_model = onnxmltools.convert_coreml(coreml_model)
# Save as protobuf
onnxmltools.utils.save_model(onnx_model, output_onnx_model)

How can I resolve this?

Hi,

This error, “ERROR: Network must have at least one output”, usually means that the TensorRT ONNX parser failed to parse your model. If you are using the Python API, try adding some error checking when parsing your ONNX model, similar to this code: https://github.com/rmccorm4/tensorrt-utils/blob/master/classification/imagenet/onnx_to_tensorrt.py#L185-L191
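For example, a minimal sketch along those lines, assuming the TensorRT 7 Python API with an explicit-batch network (the file name mnist1.onnx comes from this thread):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
# Explicit-batch network creation flag required by the ONNX parser in TensorRT 7
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(EXPLICIT_BATCH) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("mnist1.onnx", "rb") as f:
        if not parser.parse(f.read()):
            # Print every error the parser recorded, instead of failing
            # later with "Network must have at least one output"
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit("Failed to parse the ONNX file")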

An alternative is to try parsing your model with “trtexec --onnx=model.onnx” for TensorRT 6, or “trtexec --onnx=model.onnx --explicitBatch” for TensorRT 7, which should give better error messages by default.
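You can also inspect the exported file directly with the onnx Python package to confirm the graph is well-formed and actually declares an output (a minimal sketch; onnx.load and onnx.checker.check_model are the standard onnx APIs):

import onnx

# Load the exported model and run ONNX's structural validation
model = onnx.load("mnist1.onnx")
onnx.checker.check_model(model)

# The TensorRT error complains about missing outputs, so list what the
# graph actually declares
print("inputs: ", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])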

Hi, thank you for your reply. I followed this code: https://github.com/rmccorm4/tensorrt-utils/blob/master/classification/imagenet/onnx_to_tensorrt.py#L185-L191
like this: python3 onnx_to_tensorrt.py --onnx mnist1.onnx -o mnist.engine, and I received the following error:

2020-02-26 12:42:51 - __main__ - INFO - TRT_LOGGER Verbosity: Severity.ERROR
ERROR: Failed to parse the ONNX file: mnist1.onnx
In node 0 (importModel): INVALID_GRAPH: Assertion failed: tensors.count(input_name)

mnist1.onnx is the model converted from mnist1.caffemodel to ONNX.
How can I resolve this?
Thanks.

The following is the output from executing trtexec --onnx=mnist1.onnx:

&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=mnist1.onnx
[01/26/2020-13:04:20] [I] === Model Options ===
[01/26/2020-13:04:20] [I] Format: ONNX
[01/26/2020-13:04:20] [I] Model: mnist1.onnx
[01/26/2020-13:04:20] [I] Output:
[01/26/2020-13:04:20] [I] === Build Options ===
[01/26/2020-13:04:20] [I] Max batch: 1
[01/26/2020-13:04:20] [I] Workspace: 16 MB
[01/26/2020-13:04:20] [I] minTiming: 1
[01/26/2020-13:04:20] [I] avgTiming: 8
[01/26/2020-13:04:20] [I] Precision: FP32
[01/26/2020-13:04:20] [I] Calibration: 
[01/26/2020-13:04:20] [I] Safe mode: Disabled
[01/26/2020-13:04:20] [I] Save engine: 
[01/26/2020-13:04:20] [I] Load engine: 
[01/26/2020-13:04:20] [I] Inputs format: fp32:CHW
[01/26/2020-13:04:20] [I] Outputs format: fp32:CHW
[01/26/2020-13:04:20] [I] Input build shapes: model
[01/26/2020-13:04:20] [I] === System Options ===
[01/26/2020-13:04:20] [I] Device: 0
[01/26/2020-13:04:20] [I] DLACore: 
[01/26/2020-13:04:20] [I] Plugins:
[01/26/2020-13:04:20] [I] === Inference Options ===
[01/26/2020-13:04:20] [I] Batch: 1
[01/26/2020-13:04:20] [I] Iterations: 10 (200 ms warm up)
[01/26/2020-13:04:20] [I] Duration: 10s
[01/26/2020-13:04:20] [I] Sleep time: 0ms
[01/26/2020-13:04:20] [I] Streams: 1
[01/26/2020-13:04:20] [I] Spin-wait: Disabled
[01/26/2020-13:04:20] [I] Multithreading: Enabled
[01/26/2020-13:04:20] [I] CUDA Graph: Disabled
[01/26/2020-13:04:20] [I] Skip inference: Disabled
[01/26/2020-13:04:20] [I] Input inference shapes: model
[01/26/2020-13:04:20] [I] === Reporting Options ===
[01/26/2020-13:04:20] [I] Verbose: Disabled
[01/26/2020-13:04:20] [I] Averages: 10 inferences
[01/26/2020-13:04:20] [I] Percentile: 99
[01/26/2020-13:04:20] [I] Dump output: Disabled
[01/26/2020-13:04:20] [I] Profile: Disabled
[01/26/2020-13:04:20] [I] Export timing to JSON file: 
[01/26/2020-13:04:20] [I] Export profile to JSON file: 
[01/26/2020-13:04:20] [I] 
----------------------------------------------------------------
Input filename:   mnist1.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    OnnxMLTools
Producer version: 1.6.0
Domain:           onnxconverter-common
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.6) than this parser was built against (0.0.3).
While parsing node number 0 [Mul]:
ERROR: ModelImporter.cpp:296 In function importModel:
[5] Assertion failed: tensors.count(input_name)
[01/26/2020-13:04:21] [E] Failed to parse onnx file
[01/26/2020-13:04:21] [E] Parsing model failed
[01/26/2020-13:04:21] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=mnist1.onnx

Thanks

Hi,

This looks like an issue with the ONNX graph itself. The assertion “tensors.count(input_name)” means node 0 (a Mul) references a tensor name the parser never registered; note the warning that the model's ir_version (0.0.6) is newer than the one the parser was built against (0.0.3), which can cause exactly this kind of failure.

You could try building the latest TensorRT OSS ONNX parser and trying again, or you could also try onnx-simplifier, as described in this issue: https://github.com/NVIDIA/TensorRT/issues/284#issuecomment-572835659
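For reference, onnx-simplifier can be run from the command line (python3 -m onnxsim mnist1.onnx mnist1_simplified.onnx) or via its Python API. A minimal sketch of the latter; simplify is onnx-simplifier's entry point, and the file names are taken from this thread:

import onnx
from onnxsim import simplify  # pip install onnx-simplifier

# Load the exported model, fold constants and clean up the graph, then
# verify the simplified model is numerically equivalent to the original
model = onnx.load("mnist1.onnx")
model_simp, check = simplify(model)
assert check, "simplified ONNX model could not be validated"
onnx.save(model_simp, "mnist1_simplified.onnx")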