Using the TensorFlow-TensorRT (TF-TRT) API, I obtained the same inference time with my optimized model as with the baseline model (and sometimes the optimized model was even slower).
I've included a set of two tests, on SSD ResNet 640x640 and on EfficientDet D0.
Hello!
Thanks for your reply; I've already seen the samples, and the TF-TRT integration itself is working fine.
I'd like to know whether TF-TRT simply doesn't support optimization for these models, or what else might be the root cause of the missing inference speed-up compared to the provided samples.
Thanks once again!
Sorry for the delayed response. Yes, it’s supported.
If the issue still persists, we recommend reaching out to the TensorFlow forum for better help.