Parse onnx failed

Software Version: DRIVE OS Linux 5.2.0 and DriveWorks 3.5
Hardware Platform: NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
Host Machine Version: native Ubuntu 18.04

When I use trtexec to convert an ONNX model, I get the following error:
[08/23/2021-11:11:41] [I] === Model Options ===
[08/23/2021-11:11:41] [I] Format: ONNX
[08/23/2021-11:11:41] [I] Model: ./model/stage1.onnx
[08/23/2021-11:11:41] [I] Output:
[08/23/2021-11:11:41] [I] === Build Options ===
[08/23/2021-11:11:41] [I] Max batch: explicit
[08/23/2021-11:11:41] [I] Workspace: 16 MB
[08/23/2021-11:11:41] [I] minTiming: 1
[08/23/2021-11:11:41] [I] avgTiming: 8
[08/23/2021-11:11:41] [I] Precision: FP32
[08/23/2021-11:11:41] [I] Calibration:
[08/23/2021-11:11:41] [I] Safe mode: Disabled
[08/23/2021-11:11:41] [I] Save engine:
[08/23/2021-11:11:41] [I] Load engine:
[08/23/2021-11:11:41] [I] Inputs format: fp32:CHW
[08/23/2021-11:11:41] [I] Outputs format: fp32:CHW
[08/23/2021-11:11:41] [I] Input build shapes: model
[08/23/2021-11:11:41] [I] === System Options ===
[08/23/2021-11:11:41] [I] Device: 0
[08/23/2021-11:11:41] [I] DLACore:
[08/23/2021-11:11:41] [I] Plugins:
[08/23/2021-11:11:41] [I] === Inference Options ===
[08/23/2021-11:11:41] [I] Batch: 1
[08/23/2021-11:11:41] [I] Input inference shapes: model
[08/23/2021-11:11:41] [I] Iterations: 10 (200 ms warm up)
[08/23/2021-11:11:41] [I] Duration: 10s
[08/23/2021-11:11:41] [I] Sleep time: 0ms
[08/23/2021-11:11:41] [I] Streams: 1
[08/23/2021-11:11:41] [I] Spin-wait: Disabled
[08/23/2021-11:11:41] [I] Multithreading: Enabled
[08/23/2021-11:11:41] [I] CUDA Graph: Disabled
[08/23/2021-11:11:41] [I] Skip inference: Disabled
[08/23/2021-11:11:41] [I] Consistency: Disabled
[08/23/2021-11:11:41] [I] === Reporting Options ===
[08/23/2021-11:11:41] [I] Verbose: Disabled
[08/23/2021-11:11:41] [I] Averages: 10 inferences
[08/23/2021-11:11:41] [I] Percentile: 99
[08/23/2021-11:11:41] [I] Dump output: Disabled
[08/23/2021-11:11:41] [I] Profile: Disabled
[08/23/2021-11:11:41] [I] Export timing to JSON file:
[08/23/2021-11:11:41] [I] Export profile to JSON file:
[08/23/2021-11:11:41] [I]

Input filename: ./model/stage1.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: pytorch
Producer version: 1.8
Model version: 0
Doc string:

ERROR: ModelImporter.cpp:93 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
[08/23/2021-11:11:42] [E] Failed to parse onnx file
[08/23/2021-11:11:43] [E] Parsing model failed
terminate called after throwing an instance of ‘std::runtime_error’
what(): Failed to create object
Aborted (core dumped)


Could you run trtexec with --verbose and share the detailed log with us?
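For reference, a verbose invocation would look like the following. Only `--onnx` and `--verbose` are certain here; `--explicitBatch` is an assumption based on the "Max batch: explicit" line in the log above, and the output file name mirrors the attachment later in this thread:

```shell
# Re-run the same conversion with verbose logging and capture it to a file.
# --explicitBatch is assumed from the "Max batch: explicit" entry in the log.
trtexec --onnx=./model/stage1.onnx --explicitBatch --verbose 2>&1 | tee test.log
```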

Hi, @AastaLLL
Thanks for your reply. Here is the verbose log:
test.log (18.1 KB)

I have resolved it. The cause was that the ONNX opset version used for export was too high; re-exporting with opset 10 works.
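For anyone hitting the same assertion: the opset a .onnx file was exported with can be checked before running trtexec. Below is a minimal, dependency-free sketch that reads only the two relevant top-level ModelProto fields from the protobuf wire format (ir_version is field 1, opset_import is field 8, whose field 2 is the opset number). If the `onnx` Python package is installed, `onnx.load` is the proper way to do this; this sketch just avoids the dependency.

```python
def read_varint(buf, pos):
    """Decode a protobuf varint starting at pos; return (value, new_pos)."""
    result, shift = 0, 0
    while True:
        b = buf[pos]
        result |= (b & 0x7F) << shift
        pos += 1
        if not (b & 0x80):
            return result, pos
        shift += 7

def onnx_versions(data):
    """Return (ir_version, [opset versions]) from serialized ModelProto bytes."""
    ir_version, opsets = None, []
    pos = 0
    while pos < len(data):
        tag, pos = read_varint(data, pos)
        field, wire = tag >> 3, tag & 0x7
        if wire == 0:                       # varint
            val, pos = read_varint(data, pos)
            if field == 1:                  # ModelProto.ir_version
                ir_version = val
        elif wire == 2:                     # length-delimited
            length, pos = read_varint(data, pos)
            payload = data[pos:pos + length]
            pos += length
            if field == 8:                  # ModelProto.opset_import
                sp = 0
                while sp < len(payload):
                    stag, sp = read_varint(payload, sp)
                    sfield, swire = stag >> 3, stag & 0x7
                    if swire == 0:
                        sval, sp = read_varint(payload, sp)
                        if sfield == 2:     # OperatorSetIdProto.version
                            opsets.append(sval)
                    elif swire == 2:
                        slen, sp = read_varint(payload, sp)
                        sp += slen
        elif wire == 5:                     # 32-bit field, skip
            pos += 4
        elif wire == 1:                     # 64-bit field, skip
            pos += 8
    return ir_version, opsets

# Example: a ModelProto fragment with ir_version 6 and one opset_import of 11,
# matching the "ONNX IR version: 0.0.6 / Opset version: 11" lines above.
sample = b"\x08\x06\x42\x02\x10\x0b"
print(onnx_versions(sample))  # (6, [11])
```

To check a real file: `onnx_versions(open("./model/stage1.onnx", "rb").read())` — after a re-export with opset 10, the second element should read `[10]`.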


Dear @wang_chen2,
Glad to hear you have resolved the issue.