Description
Converting an ONNX model (exported from PyTorch) to a TensorRT engine with trtexec fails with the following assertion:
nvinfer1::rt::cuda::WinogradConvActRunner::WinogradConvActRunner(nvinfer1::rt::DefaultRunnerParameters, const nvinfer1::ConvolutionParameters&, const std::vector&): Assertion `matchNbDims(mSrcTensors[0], mDstTensors[0]) && (mSrcTensors.size() == 1 || matchValidDims(popFront(mSrcTensors[1].extent), popFront(mDstTensors[0].extent)))' failed.
Aborted (core dumped)
Environment
TensorRT Version: 7
GPU Type: 930M
Nvidia Driver Version: 440
CUDA Version: 10.2
CUDNN Version: 7.6
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.3
Baremetal or Container (if container which image + tag):
Relevant Files
./trtexec --explicitBatch --onnx=model.onnx
&&&& RUNNING TensorRT.trtexec # ./trtexec --explicitBatch --onnx=model.onnx
[04/12/2020-15:53:38] [I] === Model Options ===
[04/12/2020-15:53:38] [I] Format: ONNX
[04/12/2020-15:53:38] [I] Model: model.onnx
[04/12/2020-15:53:38] [I] Output:
[04/12/2020-15:53:38] [I] === Build Options ===
[04/12/2020-15:53:38] [I] Max batch: explicit
[04/12/2020-15:53:38] [I] Workspace: 16 MB
[04/12/2020-15:53:38] [I] minTiming: 1
[04/12/2020-15:53:38] [I] avgTiming: 8
[04/12/2020-15:53:38] [I] Precision: FP32
[04/12/2020-15:53:38] [I] Calibration:
[04/12/2020-15:53:38] [I] Safe mode: Disabled
[04/12/2020-15:53:38] [I] Save engine:
[04/12/2020-15:53:38] [I] Load engine:
[04/12/2020-15:53:38] [I] Inputs format: fp32:CHW
[04/12/2020-15:53:38] [I] Outputs format: fp32:CHW
[04/12/2020-15:53:38] [I] Input build shapes: model
[04/12/2020-15:53:38] [I] === System Options ===
[04/12/2020-15:53:38] [I] Device: 0
[04/12/2020-15:53:38] [I] DLACore:
[04/12/2020-15:53:38] [I] Plugins:
[04/12/2020-15:53:38] [I] === Inference Options ===
[04/12/2020-15:53:38] [I] Batch: Explicit
[04/12/2020-15:53:38] [I] Iterations: 10
[04/12/2020-15:53:38] [I] Duration: 3s (+ 200ms warm up)
[04/12/2020-15:53:38] [I] Sleep time: 0ms
[04/12/2020-15:53:38] [I] Streams: 1
[04/12/2020-15:53:38] [I] ExposeDMA: Disabled
[04/12/2020-15:53:38] [I] Spin-wait: Disabled
[04/12/2020-15:53:38] [I] Multithreading: Disabled
[04/12/2020-15:53:38] [I] CUDA Graph: Disabled
[04/12/2020-15:53:38] [I] Skip inference: Disabled
[04/12/2020-15:53:38] [I] Inputs:
[04/12/2020-15:53:38] [I] === Reporting Options ===
[04/12/2020-15:53:38] [I] Verbose: Disabled
[04/12/2020-15:53:38] [I] Averages: 10 inferences
[04/12/2020-15:53:38] [I] Percentile: 99
[04/12/2020-15:53:38] [I] Dump output: Disabled
[04/12/2020-15:53:38] [I] Profile: Disabled
[04/12/2020-15:53:38] [I] Export timing to JSON file:
[04/12/2020-15:53:38] [I] Export output to JSON file:
[04/12/2020-15:53:38] [I] Export profile to JSON file:
[04/12/2020-15:53:38] [I]
Input filename: model.onnx
ONNX IR version: 0.0.4
Opset version: 9
Producer name: pytorch
Producer version: 1.3
Domain:
Model version: 0
Doc string:
[04/12/2020-15:53:39] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[04/12/2020-15:53:39] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[04/12/2020-15:53:39] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[04/12/2020-15:53:39] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[04/12/2020-15:53:39] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[04/12/2020-15:53:39] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[04/12/2020-15:53:39] [W] [TRT] Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
[04/12/2020-15:53:39] [W] [TRT] Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
[04/12/2020-15:53:39] [W] [TRT] Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
[04/12/2020-15:53:39] [W] [TRT] Unused Input: learned_1
[04/12/2020-15:53:39] [W] [TRT] Unused Input: learned_2
[04/12/2020-15:53:39] [W] [TRT] Unused Input: learned_3
[04/12/2020-15:53:46] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
trtexec: …/rtExt/cuda/customWinogradConvActRunner.cpp:48: nvinfer1::rt::cuda::WinogradConvActRunner::WinogradConvActRunner(nvinfer1::rt::DefaultRunnerParameters, const nvinfer1::ConvolutionParameters&, const std::vector&): Assertion `matchNbDims(mSrcTensors[0], mDstTensors[0]) && (mSrcTensors.size() == 1 || matchValidDims(popFront(mSrcTensors[1].extent), popFront(mDstTensors[0].extent)))' failed.
Aborted (core dumped)
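(Regarding the workspace warning above: the same build can be retried with a larger workspace and verbose logging, e.g. ./trtexec --explicitBatch --onnx=model.onnx --workspace=1024 --verbose, where 1024 MB is an arbitrary value; I can attach the resulting verbose log if it helps.)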
Steps To Reproduce
1. Export the PyTorch model to ONNX with torch.onnx.export (opset 9); see the sketch below.
2. Run ./trtexec --explicitBatch --onnx=model.onnx to convert the ONNX model to a TensorRT engine; the build aborts with the assertion shown above.
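For reference, the ONNX export was done along these lines (a minimal sketch only: the network, input shape, and tensor names below are placeholders, not the actual model from this report):

import torch
import torch.nn as nn

# Placeholder network -- the real model is not included in this report.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
).eval()

dummy_input = torch.randn(1, 3, 224, 224)  # assumed input shape

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    opset_version=9,           # matches "Opset version: 9" in the log above
    input_names=["input"],     # placeholder tensor names
    output_names=["output"],
)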