[TRT]network must have at least one output

Hi, I exported a model to ONNX with keras2onnx and tried to load it into TensorRT with C++.
I received the error in the title: [TRT] network must have at least one output.

I have tried many methods but cannot solve this. Please tell me how I can solve it. Thank you.

By the way, my environment is:
TensorRT 6.0,
CUDA 10.0,
cuDNN 7.6,
JetPack 4.3

Can anyone help me?

Hi @lululu991129 ,
Can you please share your ONNX model?
Also, a few things you may try.
Check that you have a valid ONNX model:

import onnx

# Load the exported model and validate its graph structure
onnx_model = onnx.load("model.onnx")  # path to your exported model
onnx.checker.check_model(onnx_model)
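If the checker passes, it can also help to print the TensorRT ONNX parser's own error messages, which usually say exactly which node or output it choked on. A minimal sketch, assuming the TensorRT Python API and an existing TRT_LOGGER:

explicit_batch = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(explicit_batch) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("model.onnx", "rb") as f:  # path to your exported model
        if not parser.parse(f.read()):
            # Dump every parser error instead of failing silently
            for i in range(parser.num_errors):
                print(parser.get_error(i))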

Also, are you using any custom plugins?
Thanks

model.zip (333.0 KB)
That’s my model above; please help me take a look.
The model itself should be fine: I have run it with onnxruntime without issues.
Also, I did not use any custom plugins.
main.zip (1.2 KB)
That’s my code above, which I used to transform the ONNX model into a TRT engine with C++.
@AakankshaS

Please help me.

I have solved this problem by adding this code (Python API), which creates the network with the EXPLICIT_BATCH flag so the ONNX parser can set the network outputs:

explicit_batch = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(explicit_batch) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
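Since the original loading code was C++, the same fix there is to pass the explicit-batch flag to createNetworkV2 instead of using the old implicit-batch createNetwork. A minimal sketch, assuming TensorRT 6's C++ API and an existing ILogger instance named gLogger (any logger implementation works):

#include "NvInfer.h"
#include "NvOnnxParser.h"

// Create the network with the explicit-batch flag, mirroring the Python fix.
// Without this flag, the ONNX parser cannot mark the network outputs and
// TensorRT reports "network must have at least one output".
const auto explicitBatch = 1U << static_cast<uint32_t>(
    nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);

nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
nvinfer1::INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);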

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.