Hi, I am implementing plugin layers with support for FP16 mode. However, I can't find any documentation that describes how TensorRT handles plugin layers in FP16 mode. Does it convert FP16 to FP32 before and after the plugin layer? If so, does that mean we only need to write an FP32 version of the plugin?
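For reference, here is a minimal sketch of the part of the plugin interface I am asking about, assuming the TensorRT 5-era `IPluginV2` interface (on older releases `PluginFormat::kLINEAR` was spelled `kNCHW`). The class name `MyPlugin`, the member `mDataType`, and the kernel-launch comments are placeholders, and all the other required `IPluginV2` overrides are elided:

```cpp
#include <NvInfer.h>

using namespace nvinfer1;

class MyPlugin : public IPluginV2
{
public:
    // The builder queries this to learn which precisions the plugin can run in.
    // My understanding is that returning true for kHALF is what lets the plugin
    // receive FP16 tensors directly; if only kFLOAT is accepted here, TensorRT
    // would have to reformat (FP16 -> FP32) before the plugin and back after it.
    bool supportsFormat(DataType type, PluginFormat format) const override
    {
        return (type == DataType::kFLOAT || type == DataType::kHALF)
            && format == PluginFormat::kLINEAR;
    }

    // The builder reports which of the supported formats it actually chose;
    // remember it so enqueue() can dispatch the matching kernel.
    void configureWithFormat(const Dims* inputDims, int nbInputs,
                             const Dims* outputDims, int nbOutputs,
                             DataType type, PluginFormat format,
                             int maxBatchSize) override
    {
        mDataType = type;
    }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        if (mDataType == DataType::kHALF)
        {
            // launch the __half kernel here (hypothetical)
        }
        else
        {
            // launch the float kernel here (hypothetical)
        }
        return 0;
    }

private:
    DataType mDataType{DataType::kFLOAT};
    // ... remaining IPluginV2 overrides elided ...
};
```

In other words: is writing only the FP32 kernel and restricting `supportsFormat()` to `kFLOAT` enough for an FP16 engine to build correctly, or is the FP16 path above required?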