Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other
Target Operating System
Linux
QNX
other
Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other
SDK Manager Version
1.6.0.8170
other
Host Machine Version
native Ubuntu 18.04
other
I'd like to parse my ONNX model using TensorRT 6.5.0.
However, I get the following error, which I don't see with TensorRT 6.3.0:
[TensorRT] VERBOSE: ModelImporter.cpp:108: Parsing node: Conv_0 [Conv]
[TensorRT] VERBOSE: ModelImporter.cpp:124: Searching for input: input.1
[TensorRT] VERBOSE: ModelImporter.cpp:124: Searching for input: 236
[TensorRT] VERBOSE: ModelImporter.cpp:124: Searching for input: 237
[TensorRT] VERBOSE: ModelImporter.cpp:130: Conv_0 [Conv] inputs: [input.1 -> (1, 3, 512, 512)], [236 -> (64, 3, 7, 7)], [237 -> (64)],
[TensorRT] VERBOSE: builtin_op_importers.cpp:441: Convolution input dimensions: (1, 3, 512, 512)
[TensorRT] ERROR: Parameter check failed at: ../builder/Network.cpp::addConvolutionNd::521, condition: int(kernelWeights.type) >= 0 && int(kernelWeights.type) < EnumMax<DataType>()
ERROR: Failed to parse the ONNX file.
In node 0 (importConv): UNSUPPORTED_NODE: Assertion failed: layer
Conv_0 node weight/bias information:
Weight
name : 236
kind : Initializer
type : float32[64,3,7,7]
Bias
name : 237
kind : Initializer
type : float32[64]
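For reference, this is roughly the parsing flow that produces the log above (a minimal sketch, not my exact script; the model path and logger severity are placeholders). The error-formatting helper is pure Python, so only `parse_onnx` itself needs a TensorRT install:

```python
def format_parser_errors(errors):
    """Join (node, description) parser errors into one readable message."""
    return "\n".join("In node %d: %s" % (node, desc) for node, desc in errors)

def parse_onnx(path):
    """Parse an ONNX file into a TensorRT network, raising on parser errors."""
    import tensorrt as trt  # requires a TensorRT install (6.x here)

    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    # Explicit-batch network, as used by the ONNX parser in TensorRT 6+.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)
    with open(path, "rb") as f:
        if not parser.parse(f.read()):
            errors = [(parser.get_error(i).node(), parser.get_error(i).desc())
                      for i in range(parser.num_errors)]
            raise RuntimeError("Failed to parse the ONNX file.\n"
                               + format_parser_errors(errors))
    return network
```

With 6.5.0 this raises at node 0 (Conv_0) with the `UNSUPPORTED_NODE` message shown above; with 6.3.0 the same model parses cleanly.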