GPU memory usage issue while using TensorFlow

Hi,

Operating System - Ubuntu 18.04 Desktop
CUDA - 10.1
NVIDIA driver - 418.116
TensorFlow GPU - 1.14/1.15

We are running a Python service that uses tensorflow-gpu 1.14/1.15 on a server with two Tesla T4 GPU cards. When the service runs with CUDA 10.1, GPU memory usage for the Python process never exceeds 99 MB, as shown in the attached screenshot below.
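For reference, here is a minimal sketch (not our actual service code) of the kind of check we can run to confirm whether TensorFlow actually registers both GPUs; the device names in the comment are the ones TF 1.x normally reports:

```python
# Sketch only: confirm that TensorFlow 1.x can see and use the GPUs.
import tensorflow as tf
from tensorflow.python.client import device_lib

# True if TensorFlow was able to initialize at least one CUDA GPU.
print(tf.test.is_gpu_available())

# Lists the devices TensorFlow registered, e.g. /device:GPU:0 and /device:GPU:1
# when both T4 cards are usable.
print([d.name for d in device_lib.list_local_devices()])
```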

We are looking for a way to increase the GPU memory usage of the service, since two T4 GPUs are available in the server. Please help as soon as possible.