Error: Cuda Runtime (no kernel image is available for execution on the device)

Description

I can’t build a TensorRT engine for the ONNX object detection model taken from the TensorRT samples in /usr/src/tensorrt/samples/python/tensorflow_object_detection_api.

trtexec fails with the error: Cuda Runtime (no kernel image is available for execution on the device)

The error and logs are the same regardless of how the engine is built: via the trtexec utility or the Python code in build_engine.py.

Environment

TensorRT Version: 8.6.1
GPU Type: GeForce GTX 860M
Nvidia Driver Version: 535.86.05
CUDA Version: 11.8
CUDNN Version: 8.9.0
Operating System + Version: Ubuntu 20.04
Python Version: 3.8.10

Relevant Files

deviceQuery.txt (2.4 KB)
nvidia-smi.txt (1.8 KB)
trtexec.txt (418.6 KB)

Steps To Reproduce

Follow the instructions in /usr/src/tensorrt/samples/python/tensorflow_object_detection_api/README.md.

Note: I’m on amd64, not Jetson.

Hi,

The minimum CUDA compute capability supported by TensorRT 8.6.1 is 6.0, while the GeForce GTX 860M has a CUDA compute capability of 5.0.

Could you please try the older TensorRT version 8.5?
Also, please make sure the driver and CUDA toolkit you installed support CUDA compute capability 5.0.
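The compatibility check described above can be sketched as a small Python helper. The version thresholds below are taken from this thread (TensorRT 8.6 requires compute capability 6.0 or newer; 8.5 still supports 5.0) and should be verified against NVIDIA's official support matrix; the function name is illustrative:

```python
# Minimum CUDA compute capability per TensorRT version, per this thread.
# (Assumed values -- confirm against the TensorRT support matrix.)
MIN_COMPUTE_CAPABILITY = {
    (8, 5): (5, 0),  # TensorRT 8.5 still supports Maxwell (CC 5.0)
    (8, 6): (6, 0),  # TensorRT 8.6 requires Pascal (CC 6.0) or newer
}

def is_gpu_supported(trt_version, compute_cap):
    """Return True if compute_cap (major, minor) meets the minimum
    required by trt_version (major, minor)."""
    minimum = MIN_COMPUTE_CAPABILITY[trt_version]
    # Tuple comparison is lexicographic: (5, 0) < (6, 0)
    return compute_cap >= minimum

# GeForce GTX 860M has compute capability 5.0:
print(is_gpu_supported((8, 6), (5, 0)))  # False -> downgrade needed
print(is_gpu_supported((8, 5), (5, 0)))  # True
```

You can read your GPU's compute capability from the deviceQuery sample output or from `nvidia-smi --query-gpu=compute_cap --format=csv`.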

Thank you.

Thank you. Downgrading TensorRT to 8.5.3 solved my problem.

