Unable to convert ONNX to TRT after upgrading TensorRT from 8.5.2 to 8.6.1 on NVIDIA Orin

Description

We upgraded TensorRT to 8.6.1 on the NVIDIA Jetson Orin dev kit, and when we try to build the TRT engine with the new flags introduced in this release, we get the error below.

ERROR:
“Cuda failure: CUDA driver version is insufficient for CUDA runtime version
Aborted (core dumped)”
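
Since the error reports a driver/runtime mismatch, a minimal diagnostic sketch like the following (an assumption on my part, not from the original post; it expects libcuda.so.1 and libcudart.so to be on the loader path) can show which driver API and runtime versions the system actually sees:

import ctypes

# libcuda ships with the driver stack; libcudart ships with the CUDA toolkit.
# The exact soname may differ (e.g. libcudart.so.12) -- adjust if needed.
libcuda = ctypes.CDLL("libcuda.so.1")
libcudart = ctypes.CDLL("libcudart.so")

driver_version = ctypes.c_int()
runtime_version = ctypes.c_int()
libcuda.cuDriverGetVersion(ctypes.byref(driver_version))
libcudart.cudaRuntimeGetVersion(ctypes.byref(runtime_version))

# Values are encoded as major*1000 + minor*10, e.g. 11040 -> 11.4, 12020 -> 12.2.
print("CUDA driver API version:", driver_version.value)
print("CUDA runtime version   :", runtime_version.value)

If the driver API version printed here is lower than the runtime version, it matches the "driver version is insufficient" failure seen above.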

Environment

TensorRT Version: 8.6.1.6
GPU Type: NVIDIA JETSON ORIN
Nvidia Driver Version: Not sure; please suggest commands to find it on NVIDIA Orin (note: tried running nvidia-smi, but it doesn't work on the Orin dev kit; see the sketch after this list)
CUDA Version: 12.2.140
CUDNN Version: 8.6.0.166
Operating System + Version: Ubuntu 20.04.6 LTS
Python Version (if applicable): Python 3.8
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 2.0.1
Baremetal or Container (if container which image + tag):
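
Since nvidia-smi is not available on Jetson, the driver stack version is tied to the L4T/JetPack release. A minimal sketch for reporting it, assuming a standard JetPack install (the file path and package names below are the usual ones, not taken from this post):

import os

os.system("cat /etc/nv_tegra_release")          # L4T release string, e.g. R35/R36
os.system("dpkg-query --show nvidia-l4t-core")  # installed L4T core package version
os.system("dpkg-query --show nvidia-jetpack")   # JetPack meta-package version (if installed)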

Relevant Files

Any demo transformer-based ONNX model can be used to reproduce the issue.

Steps To Reproduce

os.system(f"/usr/src/tensorrt/bin/trtexec --onnx={onnx_model_path} --saveEngine={trt_model_path} --fp8 --verbose")

Hi @khachit.basetti,
Apologies for the delayed response.
We request you to raise this concern on the Jetson forum.

Thanks