lqs1
February 23, 2023, 3:12am
1
Description
For example, I’m running the official 22.12 docker image and want to upgrade TensorRT to 8.5.2.2, the version shipped in the official 23.01 image. Is there any way to do this other than switching to the 23.01 container?
I want to do this because, starting with the 23.01 containers, the CUDA toolkit version is 12.0, which is incompatible with hosts whose driver only supports CUDA 11.x. For example, on such a host, PyTorch inside the 23.01 container does not work and reports a CUDA-incompatibility error. Since the host machine is not my personal one, I don’t have permission to upgrade the CUDA driver.
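One approach that may work (not confirmed in this thread; the package name, version pin, and image tag below are assumptions to verify against the TensorRT install guide and the NGC release notes) is to upgrade the TensorRT Python wheel inside the 22.12 container with pip, leaving the container’s CUDA 11.x toolkit untouched:

```bash
# Hedged sketch: upgrade TensorRT's Python packages inside the existing
# 22.12 container instead of moving to the CUDA 12-based 23.01 image.
# The 8.5.2.2 version pin and the image tag are assumptions to verify.
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:22.12-py3 bash

# Inside the container:
pip install --upgrade tensorrt==8.5.2.2

# Verify which version Python now loads:
python -c "import tensorrt; print(tensorrt.__version__)"
```

Note that this only upgrades the Python bindings and the libraries the wheel pulls in; anything linked against the container’s preinstalled C++ TensorRT libraries (Torch-TensorRT, for instance) may still load the old version.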
NVES
February 23, 2023, 3:37am
2
Hi,
We recommend that you check the supported features at the link below.
These support matrices describe the supported platforms, features, and hardware capabilities of the NVIDIA TensorRT 8.5.3 APIs, parsers, and layers.
You can refer to the link below for the full list of supported operators.
For unsupported operators, you need to write a custom plugin that implements the operation; for one way to check which operators in your model need plugins, see the sketch after the operator matrix below.
# Supported ONNX Operators
TensorRT 8.5 supports operators up to Opset 17. The latest information on ONNX operators can be found [here](https://github.com/onnx/onnx/blob/master/docs/Operators.md).
TensorRT supports the following ONNX data types: DOUBLE, FLOAT32, FLOAT16, INT8, and BOOL.
> Note: There is limited support for INT32, INT64, and DOUBLE types. TensorRT will attempt to cast down INT64 to INT32 and DOUBLE down to FLOAT, clamping values to `+-INT_MAX` or `+-FLT_MAX` if necessary.
See below for the support matrix of ONNX operators in ONNX-TensorRT.
## Operator Support Matrix
| Operator | Supported | Supported Types   | Restrictions |
|----------|-----------|-------------------|--------------|
| Abs      | Y         | FP32, FP16, INT32 |              |
| Acos     | Y         | FP32, FP16        |              |
| Acosh    | Y         | FP32, FP16        |              |
| Add      | Y         | FP32, FP16, INT32 |              |
*(File truncated; see the original document for the full operator matrix.)*
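To find out which operators in a specific model fall outside this matrix, one quick check (a minimal sketch; `model.onnx` is a placeholder path) is to let `trtexec`, which ships in the TensorRT containers, attempt a parse and report the failing nodes:

```bash
# Hedged sketch: try to parse an ONNX model; the parser logs any
# unsupported operators it encounters. "model.onnx" is a placeholder.
trtexec --onnx=model.onnx --verbose 2>&1 | grep -iE "unsupported|error"
```

Any operator flagged here is a candidate for a custom plugin.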
Thanks!