Problem generating ONNX from PyTorch

Hi, I have run into the error "[TensorRT] ERROR: Network must have at least one output". From the answers I have read about this problem, I think it may be caused by an unsupported operation in the ONNX model generated by torch.onnx.export.

I just want to know whether there are any tools or tutorials for checking which operations in the original ONNX model are unsupported, and for generating an ONNX model that TensorRT does support.
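
So far I can at least list the operators the exported model uses with the onnx package (a rough sketch; "model.onnx" stands in for my actual file), but I don't know how to tell which of them TensorRT supports:

```python
import onnx

# Load the exported model and validate its structure.
model = onnx.load("model.onnx")  # placeholder path
onnx.checker.check_model(model)

# Collect the distinct operator types the graph uses.
op_types = sorted({node.op_type for node in model.graph.node})
print("Operators used:", op_types)
```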

Thank you very much.

https://devtalk.nvidia.com/default/topic/1064287/tensorrt/typeerror-build_cuda_engine-incompatible-function-arguments/post/5389284/#5389284
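
The linked post boils down to marking the network output explicitly after parsing, since build_cuda_engine fails with "Network must have at least one output" otherwise. A minimal sketch of that fix (TensorRT 5/6-style Python API; the model path is a placeholder):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("model.onnx", "rb") as f:  # placeholder path
        if not parser.parse(f.read()):
            # Print every parser error -- unsupported ONNX ops show up here.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
    # If the parser registered no outputs, mark the last layer's output;
    # otherwise building the engine fails with the error above.
    if network.num_outputs == 0:
        last_layer = network.get_layer(network.num_layers - 1)
        network.mark_output(last_layer.get_output(0))
    engine = builder.build_cuda_engine(network)
```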

That will only work if your output layer is at the end of your network. So if you have a model like YOLO, which has 3 output nodes at different points in the network, it probably won't work as expected during inference; in that case you have to mark every head's output tensor yourself, along the lines of the sketch below. Btw I'm not an expert, but that's what I went through. I'm sure someone else can provide better info.
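
Something like this, continuing from the snippet above (the layer indices are made up; print network.get_layer(i).name for each layer to find the real ones in your model):

```python
def mark_all_heads(network, head_layer_indices):
    """Mark the output tensor of every detection head as a network output."""
    for idx in head_layer_indices:
        layer = network.get_layer(idx)
        network.mark_output(layer.get_output(0))

# Hypothetical indices for a YOLO-style model with three heads:
# mark_all_heads(network, [82, 94, 106])
```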

Hi dubowen91,

Did the above answer solve your problem?