I can’t build a TensorRT engine for an ONNX object detection model taken from the TensorRT samples.
trtexec fails with
Cuda Runtime (no kernel image is available for execution on the device)
The error and logs are the same no matter how the build is invoked: via the
trtexec utility or via Python code.
TensorRT Version: 8.6.1
GPU Type: GeForce GTX 860M
Nvidia Driver Version: 535.86.05
CUDA Version: 11.8
CUDNN Version: 8.9.0
Operating System + Version: Ubuntu 20.04
Python Version: 3.8.10
deviceQuery.txt (2.4 KB)
nvidia-smi.txt (1.8 KB)
trtexec.txt (418.6 KB)
Steps To Reproduce
Follow the instructions in
This looks like a Jetson issue. Please refer to the samples below in case they are useful.
For any further assistance, we will move this post to the Jetson-related forum.
I’m on amd64, not Jetson.
The minimum supported CUDA compute capability for TensorRT 8.6.1 is 6.0, and the GeForce GTX 860M has a CUDA compute capability of 5.0.
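For anyone scripting this check before a build, the version gate above can be sketched as a small helper. Note that the minimum-capability table below contains only the two data points mentioned in this thread (TensorRT 8.6 requires compute capability 6.0; 8.5 still accepted 5.0) and is not NVIDIA's full support matrix:

```python
# Sketch of the compute-capability gate described above.
# The table holds only the two data points from this thread; consult
# NVIDIA's official TensorRT support matrix for other versions.
MIN_COMPUTE_CAPABILITY = {
    "8.6": (6, 0),
    "8.5": (5, 0),
}

def trt_supports_gpu(trt_version: str, compute_capability: tuple) -> bool:
    """Return True if the given TensorRT version (major.minor[.patch])
    supports a GPU with the given (major, minor) compute capability."""
    major_minor = ".".join(trt_version.split(".")[:2])
    minimum = MIN_COMPUTE_CAPABILITY[major_minor]
    # Tuples compare element-wise, so (5, 0) >= (6, 0) is False.
    return compute_capability >= minimum

# The GeForce GTX 860M has compute capability 5.0:
print(trt_supports_gpu("8.6.1", (5, 0)))  # False -> "no kernel image" at runtime
print(trt_supports_gpu("8.5.3", (5, 0)))  # True
```

On reasonably recent drivers, the GPU's compute capability itself can be queried with `nvidia-smi --query-gpu=compute_cap --format=csv,noheader`, or read from the deviceQuery output attached above.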
Could you please try the older TensorRT version 8.5?
Also, please make sure the driver and CUDA version you installed support CUDA compute capability 5.0.
Thank you. Downgrading TensorRT to 8.5.3 solved my problem.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.