Complete error message:
RuntimeError: The detected CUDA version (12.0) mismatches the version that was used to compile PyTorch (11.4). Please make sure to use the same CUDA versions.
Hi Team,
To solve this issue, we installed CUDA 12.0 from the link below, but `nvcc --version` still shows CUDA 11.4.
We also tried exporting the location of the newly installed CUDA toolkit, but the nvcc version still shows 11.4. Please check and advise how to proceed from here:
export CUDA_HOME=/usr/local/cuda-12.0/
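For reference, `nvcc --version` reports whichever `nvcc` binary is found first on `PATH`, so exporting `CUDA_HOME` alone does not change its output. Below is a minimal sketch of the environment setup, assuming the new toolkit was installed under `/usr/local/cuda-12.0`:

```sh
# Point CUDA_HOME at the new toolkit (used when building PyTorch extensions).
export CUDA_HOME=/usr/local/cuda-12.0
# Put the new toolkit's binaries ahead of the old ones so nvcc resolves to 12.0.
export PATH=$CUDA_HOME/bin:$PATH
# Make the matching runtime libraries visible to the dynamic linker.
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH

# Verify which toolkit is now picked up.
which nvcc
nvcc --version

# Check which CUDA version the installed PyTorch build was compiled against.
python3 -c "import torch; print(torch.version.cuda)"
```

If `nvcc --version` and `torch.version.cuda` still disagree, either the toolkit on `PATH`/`CUDA_HOME` or the installed PyTorch build has to be changed so the two versions match, as the error message indicates.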
Hi,
We recommend you check the supported features at the link below.
These support matrices provide a look into the supported platforms, features, and hardware capabilities of the NVIDIA TensorRT 8.6.1 APIs, parsers, and layers.
You can refer to the link below for the full list of supported operators.
For unsupported operators, you need to create a custom plugin to support the operation; a quick way to check which operators a model uses is sketched below.
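As a first step before writing any plugin, the distinct operator types in a model can be listed with the `onnx` Python package and compared against the support matrix below (a sketch; `model.onnx` is a placeholder path):

```sh
# Print the distinct ONNX operator types used by the model, so each one can be
# checked against the support matrix; unsupported ones need a custom plugin.
python3 -c "import onnx; m = onnx.load('model.onnx'); print(sorted({n.op_type for n in m.graph.node}))"
```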
# Supported ONNX Operators
TensorRT 8.6 supports operators up to opset 17. The latest information on ONNX operators can be found [here](https://github.com/onnx/onnx/blob/master/docs/Operators.md).
TensorRT supports the following ONNX data types: DOUBLE, FLOAT32, FLOAT16, INT8, and BOOL.
> Note: There is limited support for INT32, INT64, and DOUBLE types. TensorRT will attempt to cast down INT64 to INT32 and DOUBLE down to FLOAT, clamping values to `+-INT_MAX` or `+-FLT_MAX` if necessary.
See below for the support matrix of ONNX operators in ONNX-TensorRT.
## Operator Support Matrix
| Operator | Supported | Supported Types | Restrictions |
|---------------------------|------------|-----------------|------------------------------------------------------------------------------------------------------------------------|
| Abs | Y | FP32, FP16, INT32 | |
| Acos | Y | FP32, FP16 | |
| Acosh | Y | FP32, FP16 | |
| Add | Y | FP32, FP16, INT32 | |
This file has been truncated.
Thanks!