How can I get a bigger speedup in TRTIS?

I have successfully run the Inception plan model in TRTIS (Inference Server Release 19.02-py3, TensorRT 5.0.2, GPU: Tesla P40), and my model repository is laid out as below:
plan_inception/
├── 1
│ └── model.plan
├── config.pbtxt
└── inception_labels.txt
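For reference, here is a sketch of what the `config.pbtxt` for this layout might contain to improve throughput. The field names (`instance_group`, `dynamic_batching`, etc.) are from the TRTIS model configuration schema, but the specific values below are illustrative assumptions, not my actual config, and the `input`/`output` sections are omitted:

```protobuf
name: "plan_inception"
platform: "tensorrt_plan"
max_batch_size: 128
# Run multiple execution instances on the same GPU so requests can overlap
instance_group [ { count: 2, kind: KIND_GPU } ]
# Let the server merge small client requests into larger server-side batches
dynamic_batching {
  preferred_batch_size: [ 32, 64 ]
  max_queue_delay_microseconds: 100
}
```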
We get the following results:
batch size = 1: 0.0093 s
batch size = 10: 0.036 s
batch size = 100: 0.35 s

For comparison, I ran the same Inception model directly in TensorFlow (GPU: Tesla P40) and got:
batch size = 1: 0.0168 s
batch size = 10: 0.046 s
batch size = 100: 0.36 s
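To make the two sets of numbers comparable, the timing should exclude warm-up effects (CUDA context creation, lazy allocation) and average over several runs. This is a minimal, self-contained sketch of the measurement approach; `fake_infer` is a stand-in for the real TRTIS or TensorFlow inference call, not an actual client API:

```python
import time

def time_batch(infer_fn, batch, warmup=3, iters=20):
    """Return mean wall-clock latency in seconds for infer_fn(batch)."""
    for _ in range(warmup):
        infer_fn(batch)  # warm-up runs, not counted
    start = time.perf_counter()
    for _ in range(iters):
        infer_fn(batch)
    return (time.perf_counter() - start) / iters

# Placeholder for a real inference request; replace with the actual client call.
def fake_infer(batch):
    time.sleep(0.0001 * len(batch))

for bs in (1, 10, 100):
    batch = [None] * bs
    print(f"batch size = {bs}: {time_batch(fake_infer, batch):.4f} s")
```

Measuring this way also makes it easy to separate per-request overhead (dominant at batch size 1) from raw compute time (dominant at batch size 100).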

From the numbers above, the plan model in TRTIS gives only a limited speedup over the TensorFlow model, and the advantage shrinks as the batch size grows.
Could anyone give me some advice on how to get a bigger speedup? Thank you.