ONNX parser error with TensorRT 6.5.0

Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version

Target Operating System

Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)

SDK Manager Version

Host Machine Version
native Ubuntu 18.04

I'd like to parse my ONNX model using TensorRT 6.5.0.
However, I get the following error.
I don't get this error with TensorRT 6.3.0.

[TensorRT] VERBOSE: ModelImporter.cpp:108: Parsing node: Conv_0 [Conv]
[TensorRT] VERBOSE: ModelImporter.cpp:124: Searching for input: input.1
[TensorRT] VERBOSE: ModelImporter.cpp:124: Searching for input: 236
[TensorRT] VERBOSE: ModelImporter.cpp:124: Searching for input: 237
[TensorRT] VERBOSE: ModelImporter.cpp:130: Conv_0 [Conv] inputs: [input.1 -> (1, 3, 512, 512)], [236 -> (64, 3, 7, 7)], [237 -> (64)], 
[TensorRT] VERBOSE: builtin_op_importers.cpp:441: Convolution input dimensions: (1, 3, 512, 512)
[TensorRT] ERROR: Parameter check failed at: ../builder/Network.cpp::addConvolutionNd::521, condition: int(kernelWeights.type) >= 0 && int(kernelWeights.type) < EnumMax<DataType>()
ERROR: Failed to parse the ONNX file.
In node 0 (importConv): UNSUPPORTED_NODE: Assertion failed: layer

Conv_0 node Weight/Bias Information.

name : 236
kind : Initializer
type : float32[64,3,7,7]

name : 237
kind : Initializer
type : float32[64]
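The dump shows both initializers are ordinary float32 tensors, so the declared types themselves look fine. As a quick sanity check on a model's side, a small sketch that parses type lines in the format above and compares the element type against an assumed list of types the TensorRT 6 ONNX importer accepts:

```python
import re

# Assumption: element types the TensorRT 6 ONNX importer accepts for Conv weights.
ASSUMED_SUPPORTED = {"float32", "float16", "int8", "int32"}

def parse_type_line(s: str):
    """Parse a type line like 'float32[64,3,7,7]' into (dtype, dims)."""
    m = re.fullmatch(r"(\w+)\[([\d,]*)\]", s.strip())
    if m is None:
        raise ValueError(f"unrecognized type string: {s!r}")
    dims = [int(d) for d in m.group(2).split(",") if d]
    return m.group(1), dims

dtype, dims = parse_type_line("float32[64,3,7,7]")
print(dtype, dims, dtype in ASSUMED_SUPPORTED)  # float32 [64, 3, 7, 7] True
```

When the declared types check out like this, the parser error is more likely environmental than a problem in the model itself.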

Hi @t.hoso,

This forum is for developers in the NVIDIA DRIVE™ AGX SDK Developer Program. We will need you to use an account with a corporate or university email address.

Alternatively, you can change your current account to use a corporate or university email address by following these steps:

My Profile | NVIDIA Developer → “Edit Profile” → “Change email” → “CHANGE”

Sorry for any inconvenience.

Sorry. I updated my account.

I also solved this issue.
It was probably a wrong entry in my PATH.
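Since the fix turned out to be a PATH problem, a quick stdlib-only sketch (the helper name is hypothetical) for spotting which TensorRT locations a path-style environment variable would pick up:

```python
import os

def find_entries(var_name: str, keyword: str = "tensorrt", env=None):
    """Return entries of a path-style env var that contain keyword
    (case-insensitive). Useful for PATH and LD_LIBRARY_PATH."""
    env = os.environ if env is None else env
    return [p for p in env.get(var_name, "").split(os.pathsep)
            if keyword in p.lower()]

# Example with a fabricated environment; on a real system, call it
# without the env argument and inspect the result for stale entries.
fake_env = {"LD_LIBRARY_PATH": "/opt/TensorRT-6.5.0/lib:/usr/lib"}
print(find_entries("LD_LIBRARY_PATH", env=fake_env))
```

If both an old and a new TensorRT install show up, the first match on the path wins, which can produce exactly this kind of version-dependent parser failure.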

I’m sorry to bother you.

Dear @t.hoso,
Are you using trtexec to generate the TRT engine? Did you check this on the host or on the target? Could you share a simple model to reproduce this?