I implemented custom plugin layers that only support FP32, and they work well when the network is built in FP16 mode on TensorRT 4.0. However, they report an error when running on TensorRT 5.0.2. I am wondering if we need to implement an FP16-supported version of the plugin layers, or whether TensorRT will take care of the conversion for us, since there is supposed to be an implicit data conversion between plugin layers and built-in layers. For reference, a rough sketch of how my plugin declares its supported formats is below.
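The sketch assumes the `IPluginV2` interface from TensorRT 5; the class name `MyPlugin` and the member `mDataType` are placeholders, and the remaining required `IPluginV2` methods are omitted for brevity. The idea is that `supportsFormat` is where the plugin tells the builder which precisions it accepts, so returning `true` only for `kFLOAT` should mean the builder never hands the plugin FP16 buffers, if it reformats at the plugin boundary at all:

```cpp
#include "NvInfer.h"

using namespace nvinfer1;

class MyPlugin : public IPluginV2
{
public:
    // Advertise FP32/NCHW only. If the builder inserts implicit
    // conversions, FP16 tensors would be reformatted to FP32 before
    // reaching enqueue(); to natively support FP16, this would also
    // return true for DataType::kHALF.
    bool supportsFormat(DataType type, PluginFormat format) const override
    {
        return type == DataType::kFLOAT && format == PluginFormat::kNCHW;
    }

    // Record the precision the builder actually selected, so enqueue()
    // can interpret the input/output buffers correctly.
    void configureWithFormat(const Dims* inputDims, int nbInputs,
                             const Dims* outputDims, int nbOutputs,
                             DataType type, PluginFormat format,
                             int maxBatchSize) override
    {
        mDataType = type;
    }

private:
    DataType mDataType{DataType::kFLOAT};

    // ... other IPluginV2 methods (getNbOutputs, getOutputDimensions,
    // initialize, terminate, enqueue, serialize, clone, etc.) omitted ...
};
```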