Warning: Skipping tactic 0 due to kernel generation error


I’m getting the following warnings when building an engine:

WARNING: Skipping tactic 0 due to kernel generation error.

As well as:

WARNING: Convolution + generic activation fusion is disable due to incompatible driver or nvrtc

I’m running on:

Ubuntu 18.04
CUDA 11.3
cuDNN 8.2.1
TensorRT 8.0.3
CUDA driver: 495.29.05

What is the problem? They look serious enough that I don’t think I should simply ignore them.

I do not have this problem on CUDA 11.4 + TensorRT 8.2.2. However, I cannot use TRT 8.2.2 due to a performance regression, so I’m stuck on TRT 8.0.3, which according to the documentation only supports up to CUDA 11.3, not CUDA 11.4.

Could you advise on a way forward?



Could you please share an ONNX model that reproduces the issue?
Thank you.


Thanks for the reply. The model is proprietary, so I cannot share it here; I will try to prepare a smaller sample.

Until then, can you give a hint as to what is happening? In general, error messages should be self-contained, descriptive, and actionable for users. The messages above don’t tell me what the problem is.

Also, thanks for the hint about using containers. Unfortunately we cannot use them in our build environment, but we can take a look to see the exact versions of all the dependencies.

One thing I notice is that the containers violate your own requirements. For example:

This container is based on TRT 8.0.3, and yet uses CUDA 11.4 (the docs state a maximum of 11.3) and cuDNN 8.2.4 (docs: maximum 8.2.1).

Could you comment on that?


This may be happening because of a CUDA version mismatch with the requirements. Please try changing the CUDA version according to the support matrix.
Please find a similar issue here.

Thank you.


I am using the correct CUDA version according to the support matrix; CUDA 11.3 should be supported.

I have read the post you mention, but there the issue was with CUDA 11.1 and was solved by updating to 11.3.

model.onnx (3.5 KB)

Attached you can find a sample ONNX model that reproduces the problem. Running the following produces the warning below:

./trtexec --onnx=model.onnx


[01/19/2022-12:09:34] [W] [TRT] Convolution + generic activation fusion is disable due to incompatible driver or nvrtc

By the way, this is running on a Quadro RTX 4000 with Max-Q Design.


We are unable to reproduce this issue on the latest TRT version.
The above error indicates the same thing: an NVRTC- or CUDA-related problem.
As mentioned in the release notes for version 8.0.3, a known CUDA 11.4 NVRTC issue during kernel generation was fixed.

We recommend upgrading your CUDA version and trying again. We also suggest using the latest TensorRT if possible.

Thank you.


The release notes you mention say that the issue applies to Windows, not Linux (which is where I’m running).

I have upgraded to CUDA 11.4 and the problem still persists. What version do you suggest upgrading to?

As I already said, using the latest TensorRT is not an option, because a large accuracy regression introduced in 8.2.2 renders our model unusable.

Can you please also clarify why the TensorRT NGC container version 21.10 (TensorRT 8.0.3) has both NVRTC 11.3 and NVRTC 11.4 installed? How is the correct one picked?


I found the problem: TensorRT 8.0.3 is one of the very few versions that, for some reason, introduce a dependency on cuda-nvrtc:

Depends: libcudnn8, libcublas.so.11 | libcublas-11-1 | libcublas-11-0, libnvrtc.so.11.3 | libnvrtc.so.11.2 | cuda-nvrtc-11-1 | cuda-nvrtc-11-0

Normally, TensorRT only depends on libcudnn8 and libcublas.

After making sure that cuda-nvrtc is installed and accessible (via LD_LIBRARY_PATH or RUNPATH), the errors go away.
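For anyone hitting the same warnings, a quick sanity check is whether the dynamic loader can resolve libnvrtc at all before running trtexec. This is a sketch of my own, not an official tool, and the library/path names are illustrative:

```shell
# Can the dynamic loader find libnvrtc? Prints "found" or "missing".
check_nvrtc() {
    # 1) look in the system-wide loader cache
    if ldconfig -p 2>/dev/null | grep -q libnvrtc; then
        echo "found"
        return 0
    fi
    # 2) look in each directory listed in LD_LIBRARY_PATH
    old_ifs=$IFS; IFS=:
    for d in $LD_LIBRARY_PATH; do
        if [ -n "$d" ] && ls "$d"/libnvrtc.so* >/dev/null 2>&1; then
            IFS=$old_ifs
            echo "found"
            return 0
        fi
    done
    IFS=$old_ifs
    echo "missing"
    return 0
}

check_nvrtc
```

If this prints "missing", the "Skipping tactic 0 due to kernel generation error" and fusion warnings are likely to appear, since TRT 8.0.3 needs NVRTC at engine-build time.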

This dependency on NVRTC does not exist in TensorRT 8.2.x:

Depends: libcudnn8, libcublas.so.11 | libcublas-11-1 | libcublas-11-0
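For reference, the Depends lines above can be checked on an installed system with dpkg-query. A sketch, assuming libnvinfer8 is the TensorRT runtime package name on your system (verify with `dpkg -l | grep nvinfer`):

```shell
# Print the declared dependencies of an installed Debian package,
# falling back to a message if the package is not present.
show_deps() {
    dpkg-query -W -f='${Depends}\n' "$1" 2>/dev/null \
        || echo "$1 not installed"
}

show_deps libnvinfer8
```

Comparing this output between the 8.0.3 and 8.2.x packages is how the extra libnvrtc dependency shows up.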