Triton Inference Server with TensorRT: Error Code 1: Cask (Cask convolution execution)

Description

Hello,
I have run into a problem.

I set up a Triton Inference Server. My ONNX model works in Triton Inference Server, but my TensorRT model does not work on the server.
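
(For reference, a minimal sketch for confirming from the client side which models the server actually loaded, using the Triton HTTP client Python API; the model names and the default port below are placeholders, not my real configuration:)

```python
# Minimal readiness check against a running Triton server.
# Assumes the default HTTP port (8000); the model names are placeholders.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

print("server live: ", client.is_server_live())
print("server ready:", client.is_server_ready())

# In my case the ONNX model loads but the TensorRT model does not.
print("onnx model ready:    ", client.is_model_ready("my_onnx_model"))
print("tensorrt model ready:", client.is_model_ready("my_tensorrt_model"))

# Lists every model in the repository with its load state and any failure reason.
print(client.get_model_repository_index())
```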

Here is my problem; I have attached two pictures.

First, my NVIDIA driver and CUDA toolkit versions:

Second, the error from Triton Inference Server:

Please bear with me; my English is weak. :((

Thank you for reading my topic.

Environment

TensorRT Version: 8.6.16
GPU Type: NVIDIA GeForce RTX 3060
Nvidia Driver Version: 536.40
CUDA Version: 12.2
CUDNN Version: 12.1
Operating System + Version: Windows 11
Python Version (if applicable): 3.9

Hi,
We recommend you raise this query in the Triton Inference Server GitHub issues section: https://github.com/triton-inference-server/server/issues
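
In the meantime, one quick check: "Error Code 1: Cask (Cask convolution execution)" at inference time often means the serialized engine was built with a different TensorRT version, or on a different GPU, than the one running the server. Below is a minimal sketch to verify that the plan file deserializes on the deployment machine (assuming the engine file is named model.plan; adjust the path to yours):

```python
# Try to deserialize the TensorRT engine outside of Triton.
# "model.plan" is an assumed placeholder path.
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)  # verbose logs surface version/arch mismatches
runtime = trt.Runtime(logger)

with open("model.plan", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

if engine is None:
    print("Deserialization failed -- the engine was likely built with a "
          "different TensorRT version or for a different GPU.")
else:
    # Creating an execution context exercises more of the runtime path.
    context = engine.create_execution_context()
    print("Engine deserialized; execution context created:", context is not None)
```

If this fails, rebuilding the engine on the same GPU and with the same TensorRT version that Triton uses is the first thing to try.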

Thanks!