This specific issue arises because the TensorRT ONNX parser is not currently compatible with ONNX models exported from PyTorch 1.3. If you downgrade to PyTorch 1.2, the issue should go away.
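If it helps, here is a minimal sketch of re-exporting a model to ONNX after downgrading. The `resnet18` model, input shape, and output file name are placeholders for illustration, not specifics from this thread:

```python
# Assumes a PyTorch 1.2 environment, e.g.:
#   pip install torch==1.2.0 torchvision==0.4.0
import torch
import torchvision

# Placeholder model and input shape; substitute your own network and a
# dummy input matching the dimensions it expects.
model = torchvision.models.resnet18(pretrained=True)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Exporting from PyTorch 1.2 produces an ONNX graph that the current
# TensorRT ONNX parser can consume.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)
```

You can then check that the parser accepts the re-exported model with `trtexec --onnx=model.onnx`.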