Hello,
I am able to create an ONNX file of a simple ResNet model, but creating the TensorRT engine fails with the following error:
[09/04/2021-08:23:17] [I] === Model Options ===
[09/04/2021-08:23:17] [I] Format: ONNX
[09/04/2021-08:23:17] [I] Model: resnet50_onnx_model.onnx
[09/04/2021-08:23:17] [I] Output:
[09/04/2021-08:23:17] [I] === Build Options ===
[09/04/2021-08:23:17] [I] Max batch: explicit
[09/04/2021-08:23:17] [I] Workspace: 16 MB
[09/04/2021-08:23:17] [I] minTiming: 1
[09/04/2021-08:23:17] [I] avgTiming: 8
[09/04/2021-08:23:17] [I] Precision: FP32
[09/04/2021-08:23:17] [I] Calibration:
[09/04/2021-08:23:17] [I] Safe mode: Disabled
[09/04/2021-08:23:17] [I] Save engine: resnet_engine.trt
[09/04/2021-08:23:17] [I] Load engine:
[09/04/2021-08:23:17] [I] Inputs format: fp32:CHW
[09/04/2021-08:23:17] [I] Outputs format: fp32:CHW
[09/04/2021-08:23:17] [I] Input build shapes: model
[09/04/2021-08:23:17] [I] === System Options ===
[09/04/2021-08:23:17] [I] Device: 0
[09/04/2021-08:23:17] [I] DLACore:
[09/04/2021-08:23:17] [I] Plugins:
[09/04/2021-08:23:17] [I] === Inference Options ===
[09/04/2021-08:23:17] [I] Batch: Explicit
[09/04/2021-08:23:17] [I] Iterations: 10 (200 ms warm up)
[09/04/2021-08:23:17] [I] Duration: 10s
[09/04/2021-08:23:17] [I] Sleep time: 0ms
[09/04/2021-08:23:17] [I] Streams: 1
[09/04/2021-08:23:17] [I] Spin-wait: Disabled
[09/04/2021-08:23:17] [I] Multithreading: Enabled
[09/04/2021-08:23:17] [I] CUDA Graph: Disabled
[09/04/2021-08:23:17] [I] Skip inference: Disabled
[09/04/2021-08:23:17] [I] === Reporting Options ===
[09/04/2021-08:23:17] [I] Verbose: Disabled
[09/04/2021-08:23:17] [I] Averages: 10 inferences
[09/04/2021-08:23:17] [I] Percentile: 99
[09/04/2021-08:23:17] [I] Dump output: Disabled
[09/04/2021-08:23:17] [I] Profile: Disabled
[09/04/2021-08:23:17] [I] Export timing to JSON file:
[09/04/2021-08:23:17] [I] Export profile to JSON file:
[09/04/2021-08:23:17] [I]
----------------------------------------------------------------
Input filename: resnet50_onnx_model.onnx
ONNX IR version: 0.0.4
Opset version: 9
Producer name: pytorch
Producer version: 1.1
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
[09/04/2021-08:23:18] [E] [TRT] (Unnamed Layer* 0) [Convolution]: at least 4 dimensions are required for input
While parsing node number 1 [BatchNormalization]:
ERROR: builtin_op_importers.cpp:695 In function importBatchNormalization:
[6] Assertion failed: scale_weights.shape == weights_shape
[09/04/2021-08:23:18] [E] Failed to parse onnx file
[09/04/2021-08:23:18] [E] Parsing model failed
[09/04/2021-08:23:18] [E] Engine could not be created
My ONNX generation code is:
import torch
import torch.onnx
import torchvision.models as models

# Pretrained ResNet-50 in eval mode
resnet50 = models.resnet50(pretrained=True, progress=False).eval()

# Dummy input with a fixed batch size of 32
BATCH_SIZE = 32
dummy_input = torch.randn(BATCH_SIZE, 3, 224, 224)

torch.onnx.export(resnet50, dummy_input, "resnet50_onnx_model.onnx",
                  input_names=['input'],
                  output_names=['output'],
                  verbose=False,
                  )
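In case it is relevant, here is a minimal sketch (assuming the standard onnx Python package) of how the exported file can be sanity-checked before handing it to trtexec:

import onnx

# Load the exported model and run the structural checker
model = onnx.load("resnet50_onnx_model.onnx")
onnx.checker.check_model(model)

# Print the opset and graph input shape for reference
print("Opset:", model.opset_import[0].version)
print("Input:", model.graph.input[0].type.tensor_type.shape)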
The command that I use for creating the TensorRT engine is:
trtexec --onnx=resnet50_onnx_model.onnx --saveEngine=resnet_engine.trt --explicitBatch
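In case it helps to see the same build through the API, below is my rough understanding of what that command does, sketched with the TensorRT 6 Python bindings (this is an assumption on my part, not the actual internals of trtexec):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# Explicit-batch network, matching the --explicitBatch flag
explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(explicit_batch)
parser = trt.OnnxParser(network, TRT_LOGGER)

# Parse the ONNX file and report any parser errors
with open("resnet50_onnx_model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

builder.max_workspace_size = 1 << 24  # 16 MB, as shown in the log above

engine = builder.build_cuda_engine(network)
if engine is None:
    raise RuntimeError("Engine build failed")

with open("resnet_engine.trt", "wb") as f:
    f.write(engine.serialize())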
I am using:
TensorRT 6.0.1.5
CUDA 10.0
ONNX 1.5.0
PyTorch 1.1
Any help is appreciated.
Thanks in advance.