I'm working with TF-TRT to speed up TensorFlow inference on GPU. It works well on some classification networks, such as ResNet50V2, InceptionV2, and VGG19. However, when I try to accelerate object detection networks like SSD, Faster R-CNN, and Mask R-CNN, the INT8 networks produced by TF-TRT show no speed advantage over the original networks. I suspect TF-TRT does not handle object detection networks well. I also have two further questions: Are there known limitations of TF-TRT? And is there a difference in how INT8 optimization is performed between TensorRT and TF-TRT?