Description
Is there a way to query the total GPU memory in use and to set a per-process GPU memory limit during TensorRT inference? Something like PyTorch's:
torch.cuda.memory_allocated()
torch.cuda.set_per_process_memory_fraction()
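
For reference, this is roughly what I have in mind. This is only a sketch of candidate approaches, not a confirmed answer: it assumes `pycuda.driver.mem_get_info()` for the usage query, and TensorRT's workspace memory-pool limit (`IBuilderConfig.set_memory_pool_limit`, TensorRT >= 8.4) as the nearest analogue to a memory cap. Both imports are guarded so the script degrades gracefully on a machine without a GPU.

```python
# Sketch: query GPU memory usage and cap TensorRT's scratch memory.
# Assumes `pycuda` and `tensorrt` are installed; neither is imported
# at module level so the file loads without a GPU present.

GiB = 1 << 30  # 1 GiB in bytes


def report_gpu_memory():
    """Print used/total device memory via the CUDA driver (needs a GPU)."""
    import pycuda.driver as cuda
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
    free, total = cuda.mem_get_info()
    print(f"used: {(total - free) / GiB:.2f} GiB of {total / GiB:.2f} GiB")


def build_engine_with_memory_cap(onnx_path, workspace_bytes=1 * GiB):
    """Limit the builder's scratch ("workspace") pool -- the closest
    TensorRT knob I know of to torch.cuda.set_per_process_memory_fraction().
    Note this caps build-time scratch memory, not total process usage."""
    import tensorrt as trt
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        parser.parse(f.read())
    config = builder.create_builder_config()
    # TensorRT >= 8.4; older releases used config.max_workspace_size.
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace_bytes)
    return builder.build_serialized_network(network, config)


if __name__ == "__main__":
    try:
        report_gpu_memory()
    except ImportError:
        print("pycuda not installed; skipping memory query")
```

What I am unsure about is whether the workspace limit bounds the engine's total device-memory footprint at inference time, or only the tactic scratch space, and whether there is a runtime equivalent of `torch.cuda.memory_allocated()`.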
Environment
TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):