Description
Environment
TensorRT Version: 6.0.1
GPU Type: Jetson TX2
CUDA Version: 10.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
My TensorFlow model has some operations that TensorRT does not support, so I use graphsurgeon to replace them. I want to know where I can see which operations TensorRT supports.
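For reference, replacing an unsupported op in the UFF workflow usually means collapsing the offending subgraph into a plugin node with graphsurgeon. A minimal sketch follows; the file paths, namespace, plugin name, and output node are hypothetical placeholders, and the plugin op name must match a plugin actually registered with TensorRT:

```python
import graphsurgeon as gs
import uff

# Load the frozen TensorFlow graph (path is a placeholder).
dynamic_graph = gs.DynamicGraph("frozen_model.pb")

# Create a plugin node to stand in for the unsupported subgraph.
# "MyPlugin_TRT" must match the plugin registered with TensorRT.
plugin_node = gs.create_plugin_node(name="my_plugin", op="MyPlugin_TRT")

# Collapse every node under the unsupported namespace into the plugin node.
dynamic_graph.collapse_namespaces({"model/unsupported_op": plugin_node})

# Convert the modified graph to UFF for the TensorRT UFF parser.
uff.from_tensorflow(dynamic_graph.as_graph_def(),
                    output_nodes=["model/output"],
                    output_filename="model.uff")
```

This mirrors the pattern used in NVIDIA's UFF samples (e.g. the SSD sample's `config.py`); it requires the `graphsurgeon` and `uff` packages that ship with TensorRT.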
Hi,
Please refer to the link below:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-700/tensorrt-api/python_api/uff/Operators.html
Also, the Caffe parser and UFF parser are deprecated in TensorRT 7. We recommend using the ONNX parser for TRT engine generation.
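For the ONNX path, building an engine with the TensorRT 7 Python API looks roughly like this. This is a sketch, not a complete converter; the model path and workspace size are placeholders:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# ONNX models require an explicit-batch network in TRT 7.
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(EXPLICIT_BATCH) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("model.onnx", "rb") as f:  # path is a placeholder
        if not parser.parse(f.read()):
            # Unsupported operators show up here as parser errors.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB; tune for your GPU
    engine = builder.build_engine(network, config)
```

If the model uses an operator outside the support matrix, `parser.parse` returns `False` and the loop above prints which node failed.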
# Supported ONNX Operators
TensorRT 7.0 supports operators up to Opset 11. The latest information on ONNX operators can be found [here](https://github.com/onnx/onnx/blob/master/docs/Operators.md).
TensorRT supports the following ONNX data types: FLOAT32, FLOAT16, INT8, and BOOL.
\*There is limited support for INT32 and INT64 types. TensorRT will attempt to cast down INT64 to INT32 where possible. If not possible, TensorRT will throw an error. See the [TensorRT layer support matrix](https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#layers-precision-matrix) for more information on data type support.
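The INT64 cast-down rule can be illustrated with a small standard-library sketch that mimics the check TensorRT performs on INT64 weights; the function name is ours, not a TensorRT API:

```python
INT32_MIN, INT32_MAX = -(2**31), 2**31 - 1

def narrow_int64_to_int32(values):
    """Mimic TensorRT's attempt to cast INT64 weights down to INT32.

    Values that fit in 32 bits pass through unchanged; anything out of
    range raises, just as TensorRT rejects such a model with an error.
    """
    for v in values:
        if not INT32_MIN <= v <= INT32_MAX:
            raise ValueError(f"INT64 value {v} does not fit in INT32")
    return [int(v) for v in values]

print(narrow_int64_to_int32([1, -42, 2**31 - 1]))  # all fit: passed through
```

In practice this is why INT64 shape tensors and indices usually convert fine, while genuinely 64-bit weight values cause a parse failure.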
## Operator Support Matrix
| Operator | Supported? | Restrictions |
|----------|------------|--------------|
| Abs      | Y          |              |
| Acos     | Y          |              |
| Acosh    | Y          |              |
| Add      | Y          |              |
| And      | Y          |              |
| ArgMax   | Y          |              |
| ArgMin   | Y          |              |
| Asin     | Y          |              |
*(Operator table truncated in the original post.)*
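A quick way to use a support matrix like this is to diff a model's operator set against it. A minimal sketch follows; the set below covers only the rows excerpted above, and the example op names are illustrative:

```python
# Excerpt of the support matrix above; the full list is much longer.
SUPPORTED_ONNX_OPS = {"Abs", "Acos", "Acosh", "Add", "And",
                      "ArgMax", "ArgMin", "Asin"}

def unsupported_ops(model_ops):
    """Return the ops a model uses that the matrix does not list."""
    return sorted(set(model_ops) - SUPPORTED_ONNX_OPS)

print(unsupported_ops(["Add", "Abs", "NonMaxSuppression"]))
# → ['NonMaxSuppression']
```

With the `onnx` package installed, the model's op set can be collected as `{node.op_type for node in model.graph.node}` and fed to a check like this before attempting conversion.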
Thanks
Thank you for your quick reply; it is very helpful.