Error: trtexec --onnx=yolov3.onnx fails

I ran trtexec as shown below, but it fails with an error.
What is the problem? Any help would be appreciated.

Env: JetPack 4.3

jetson@jetson-nano:~/my-project/Deep-Stream-ONNX/yolov3$ trtexec --onnx=yolov3.onnx
&&&& RUNNING TensorRT.trtexec # trtexec --onnx=yolov3.onnx
[02/02/2020-22:16:18] [I] === Model Options ===
[02/02/2020-22:16:18] [I] Format: ONNX
[02/02/2020-22:16:18] [I] Model: yolov3.onnx
[02/02/2020-22:16:18] [I] Output:
[02/02/2020-22:16:18] [I] === Build Options ===
[02/02/2020-22:16:18] [I] Max batch: 1
[02/02/2020-22:16:18] [I] Workspace: 16 MB
[02/02/2020-22:16:18] [I] minTiming: 1
[02/02/2020-22:16:18] [I] avgTiming: 8
[02/02/2020-22:16:18] [I] Precision: FP32
[02/02/2020-22:16:18] [I] Calibration:
[02/02/2020-22:16:18] [I] Safe mode: Disabled
[02/02/2020-22:16:18] [I] Save engine:
[02/02/2020-22:16:18] [I] Load engine:
[02/02/2020-22:16:18] [I] Inputs format: fp32:CHW
[02/02/2020-22:16:18] [I] Outputs format: fp32:CHW
[02/02/2020-22:16:18] [I] Input build shapes: model
[02/02/2020-22:16:18] [I] === System Options ===
[02/02/2020-22:16:18] [I] Device: 0
[02/02/2020-22:16:18] [I] DLACore:
[02/02/2020-22:16:18] [I] Plugins:
[02/02/2020-22:16:18] [I] === Inference Options ===
[02/02/2020-22:16:18] [I] Batch: 1
[02/02/2020-22:16:18] [I] Iterations: 10 (200 ms warm up)
[02/02/2020-22:16:18] [I] Duration: 10s
[02/02/2020-22:16:18] [I] Sleep time: 0ms
[02/02/2020-22:16:18] [I] Streams: 1
[02/02/2020-22:16:18] [I] Spin-wait: Disabled
[02/02/2020-22:16:18] [I] Multithreading: Enabled
[02/02/2020-22:16:18] [I] CUDA Graph: Disabled
[02/02/2020-22:16:18] [I] Skip inference: Disabled
[02/02/2020-22:16:18] [I] Input inference shapes: model
[02/02/2020-22:16:18] [I] === Reporting Options ===
[02/02/2020-22:16:18] [I] Verbose: Disabled
[02/02/2020-22:16:18] [I] Averages: 10 inferences
[02/02/2020-22:16:18] [I] Percentile: 99
[02/02/2020-22:16:18] [I] Dump output: Disabled
[02/02/2020-22:16:18] [I] Profile: Disabled
[02/02/2020-22:16:18] [I] Export timing to JSON file:
[02/02/2020-22:16:18] [I] Export profile to JSON file:
[02/02/2020-22:16:18] [I]

Input filename: yolov3.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: keras2onnx
Producer version: 1.5.1
Domain: onnx
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
[02/02/2020-22:16:22] [E] [TRT] Parameter check failed at: …/builder/Network.cpp::addInput::671, condition: isValidDims(dims, hasImplicitBatchDimension())
ERROR: ModelImporter.cpp:80 In function importInput:
[8] Assertion failed: *tensor = importer_ctx->network()->addInput( input.name().c_str(), trt_dtype, trt_dims)
[02/02/2020-22:16:22] [E] Failed to parse onnx file
[02/02/2020-22:16:23] [E] Parsing model failed
[02/02/2020-22:16:23] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # trtexec --onnx=yolov3.onnx

Hi,

Based on the error, the issue seems to be related to an unsupported input node. That can happen for several reasons, so it is hard to say specifically from this log alone.
Could you please share your ONNX model so we can help further?
Meanwhile, please try the "--verbose" flag in trtexec to get more detailed information about the issue.
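In case it helps with narrowing this down, below is a minimal sketch (assuming the onnx Python package is installed and that the file name matches the one passed to trtexec) that prints the declared input names and dimensions of the model; a symbolic or otherwise unexpected dimension would be consistent with the isValidDims check failing in addInput.

import onnx

# Load the same model that was passed to trtexec.
model = onnx.load("yolov3.onnx")

for inp in model.graph.input:
    # Each dimension is either a fixed integer (dim_value) or a symbolic name (dim_param).
    dims = [
        d.dim_value if d.HasField("dim_value") else (d.dim_param or "?")
        for d in inp.type.tensor_type.shape.dim
    ]
    print(inp.name, dims)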

Thanks

Here is the model and log information. Thank you for your help.

ONNX model info:
- Model: YOLOv3
- Opset version: 10
- Accuracy: mAP of 0.553
- Inputs: resized image (1x3x416x416) and the original image size (1x2), i.e. [image.size[1], image.size[0]]
- Download: https://onnxzoo.blob.core.windows.net/models/opset_10/yolov3/yolov3.onnx
- Verbose error log:

jetson@jetson-nano:~/my-project/Deep-Stream-ONNX/yolov3$ trtexec --onnx=yolov3.onnx --verbose
&&&& RUNNING TensorRT.trtexec # trtexec --onnx=yolov3.onnx --verbose
[02/03/2020-13:50:14] [I] === Model Options ===
[02/03/2020-13:50:14] [I] Format: ONNX
[02/03/2020-13:50:14] [I] Model: yolov3.onnx
[02/03/2020-13:50:14] [I] Output:
[02/03/2020-13:50:14] [I] === Build Options ===
[02/03/2020-13:50:14] [I] Max batch: 1
[02/03/2020-13:50:14] [I] Workspace: 16 MB
[02/03/2020-13:50:14] [I] minTiming: 1
[02/03/2020-13:50:14] [I] avgTiming: 8
[02/03/2020-13:50:14] [I] Precision: FP32
[02/03/2020-13:50:14] [I] Calibration:
[02/03/2020-13:50:14] [I] Safe mode: Disabled
[02/03/2020-13:50:14] [I] Save engine:
[02/03/2020-13:50:14] [I] Load engine:
[02/03/2020-13:50:14] [I] Inputs format: fp32:CHW
[02/03/2020-13:50:14] [I] Outputs format: fp32:CHW
[02/03/2020-13:50:14] [I] Input build shapes: model
[02/03/2020-13:50:14] [I] === System Options ===
[02/03/2020-13:50:14] [I] Device: 0
[02/03/2020-13:50:14] [I] DLACore:
[02/03/2020-13:50:14] [I] Plugins:
[02/03/2020-13:50:14] [I] === Inference Options ===
[02/03/2020-13:50:14] [I] Batch: 1
[02/03/2020-13:50:14] [I] Iterations: 10 (200 ms warm up)
[02/03/2020-13:50:14] [I] Duration: 10s
[02/03/2020-13:50:14] [I] Sleep time: 0ms
[02/03/2020-13:50:14] [I] Streams: 1
[02/03/2020-13:50:14] [I] Spin-wait: Disabled
[02/03/2020-13:50:14] [I] Multithreading: Enabled
[02/03/2020-13:50:14] [I] CUDA Graph: Disabled
[02/03/2020-13:50:14] [I] Skip inference: Disabled
[02/03/2020-13:50:14] [I] Input inference shapes: model
[02/03/2020-13:50:14] [I] === Reporting Options ===
[02/03/2020-13:50:14] [I] Verbose: Enabled
[02/03/2020-13:50:14] [I] Averages: 10 inferences
[02/03/2020-13:50:14] [I] Percentile: 99
[02/03/2020-13:50:14] [I] Dump output: Disabled
[02/03/2020-13:50:14] [I] Profile: Disabled
[02/03/2020-13:50:14] [I] Export timing to JSON file:
[02/03/2020-13:50:14] [I] Export profile to JSON file:
[02/03/2020-13:50:14] [I]
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - GridAnchorRect_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - NMS_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - Reorg_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - Region_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - Clip_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - LReLU_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - PriorBox_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - Normalize_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - RPROI_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[02/03/2020-13:50:14] [V] [TRT] Plugin Creator registration succeeded - FlattenConcat_TRT

Input filename: yolov3.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: keras2onnx
Producer version: 1.5.1
Domain: onnx
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
[02/03/2020-13:50:21] [E] [TRT] Parameter check failed at: …/builder/Network.cpp::addInput::671, condition: isValidDims(dims, hasImplicitBatchDimension())
ERROR: ModelImporter.cpp:80 In function importInput:
[8] Assertion failed: *tensor = importer_ctx->network()->addInput( input.name().c_str(), trt_dtype, trt_dims)
[02/03/2020-13:50:21] [E] Failed to parse onnx file
[02/03/2020-13:50:21] [E] Parsing model failed
[02/03/2020-13:50:21] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # trtexec --onnx=yolov3.onnx --verbose
jetson@jetson-nano:~/my-project/Deep-Stream-ONNX/yolov3$

A fix should be available in the next release.
Also, the model has a NonMaxSuppression layer, which is currently not supported in TensorRT. You might have to create a custom plugin to add that support when using the next TensorRT release.
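If you want to confirm this from the model itself, here is a minimal sketch (again assuming the onnx Python package) that counts the operator types in the graph and flags NonMaxSuppression:

from collections import Counter

import onnx

model = onnx.load("yolov3.onnx")

# Count every operator type used in the graph; NonMaxSuppression is the one
# that currently needs a custom plugin in TensorRT.
op_counts = Counter(node.op_type for node in model.graph.node)
for op_type, count in sorted(op_counts.items()):
    print(op_type, count)

print("Contains NonMaxSuppression:", "NonMaxSuppression" in op_counts)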

Thanks