[TensorRT] ERROR: Network must have at least one output

Ubuntu 16.04 LTS
GPU type: 1050 Ti
NVIDIA driver version: 390.87
CUDA version: 9.0
cuDNN version: 7.13
Python version: 3.5
TensorRT version:

My model was trained in PyTorch and converted to an ONNX file.
When I used the ONNX parser to build the CUDA engine, this error appeared:
[TensorRT] ERROR: Network must have at least one output
pure virtual method called
terminate called without an active exception
Aborted (core dumped)
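For anyone hitting this: the error usually means no output tensor is marked on the network after parsing (older ONNX parsers can fail partway through the graph without raising). A possible workaround, sketched against the TensorRT 5.x Python API (this is an illustrative sketch, not code from the linked repo), is to check the parser errors and explicitly mark the last layer's output:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    # Sketch only: names follow the TensorRT 5.x Python bindings.
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        with open(onnx_path, 'rb') as f:
            if not parser.parse(f.read()):
                # Surface parser errors instead of building a broken network
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        # Workaround for "Network must have at least one output":
        # explicitly mark the last layer's output if none is marked.
        if network.num_outputs == 0:
            last_layer = network.get_layer(network.num_layers - 1)
            network.mark_output(last_layer.get_output(0))
        builder.max_workspace_size = 1 << 28  # 256 MiB scratch space
        return builder.build_cuda_engine(network)
```

If the parser reports errors, fixing the export (for example, the opset or unsupported ops) is better than marking outputs manually, since the unparsed tail of the graph will otherwise be silently dropped.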

My code and ONNX file are uploaded here; the details are:
onnx file:https://drive.google.com/open?id=19xSFuO8hl7Kgr479IXdBOsDEfOylRjlp
weights file:https://drive.google.com/open?id=19XqJF16siYCL3QpmNypPx7T-RmTTAtFC
You can also find the code here: https://github.com/MoonBunnyZZZ/trt-debug

First step: run yolo/pt20nnx.py to convert the model to an ONNX file.
Second step: run yolo/onnx_yolov3.py to test.

yolo-debug-engine.zip (12.6 KB)



I encountered the same problem, but it disappeared after I updated TensorRT to the latest version:

I am using version:

Does the problem still exist? Which Python version are you using, by the way? I am currently using 3.5.6.