I implemented custom plugin layers in FP32, and they work well when the network runs in FP16 mode on TensorRT 4.0. However, the same setup reports an error when running on TensorRT 5.0.2. I am wondering whether we need to implement an FP16 version of the plugin layer, or whether TensorRT will take care of the conversion for us, since there is supposed to be an implicit data conversion between plugin layers and built-in layers.
Related topics
| Topic | Replies | Views | Activity |
|---|---|---|---|
| How does TensorRT handle plugin layer with FP16 mode | 0 | 659 | April 29, 2019 |
| TensorRT 5.1.6 Custom plugin with fp16 issue | 6 | 1889 | November 19, 2019 |
| Tensorrt 7 - Best Practice for implementing plugin that supports both FP16 and Fp32 | 1 | 520 | August 16, 2022 |
| TensorRT fp16 plugin | 4 | 2842 | August 23, 2017 |
| Does tensor rt 5 automatically enable tensor core for int8 and fp16 mode? | 6 | 1931 | April 26, 2019 |
| TensorRT stuck on tuning plugin in FP16 mode | 1 | 441 | October 22, 2022 |
| TENSORRT Model using FP16 Plugins and Kernels | 4 | 1128 | April 26, 2019 |
| can we using INT8 if there is a customer/plugin layer? | 2 | 782 | December 11, 2017 |
| which layers of TensorRT will work in fp16 mode when enable the --half2 option? | 1 | 1041 | March 17, 2017 |
| Which layers of TensorRT will work in fp16 mode when enable the --half2 option? | 2 | 585 | October 18, 2021 |