Hi, I am implementing plugin layers with support for FP16 mode. However, I can't find any documentation that describes how TensorRT handles plugin layers in FP16 mode. Does it convert FP16 to FP32 before and after the plugin layer? If so, does that mean we only need to write an FP32 version?