GPU memory leak with dynamic batch models

Hi,
We are experiencing a GPU memory leak (not consistently reproducible) when using Triton Server with a dynamic batch model.

OS: Ubuntu 18.04
GPU: GTX 1080Ti
Triton Server Version: 1.10.0

Could the call to context->setBindingDimensions be causing the GPU memory leak? The relevant calls in the TensorRT backend are:
src/backends/tensorrt/plan_backend.cc:967 call to setBindingDimensions
src/backends/tensorrt/plan_backend.cc:1025 call to enqueueV2
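
For reference, below is a minimal standalone sketch (not Triton code) of the per-request pattern around those two calls: reshape the input binding to the batch size produced by the dynamic batcher, then launch inference with enqueueV2. The engine file name, binding indices, and tensor shapes are hypothetical placeholders, and the API usage assumes the TensorRT 7 generation shipped with this server release.

```cpp
// Sketch of the setBindingDimensions/enqueueV2 pattern with varying batch sizes.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) override {
    if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
  }
};

int main() {
  Logger logger;

  // Deserialize an engine built with an optimization profile that allows a
  // dynamic batch dimension, e.g. [1,3,224,224] .. [32,3,224,224].
  std::ifstream file("model.plan", std::ios::binary);  // hypothetical path
  std::vector<char> plan((std::istreambuf_iterator<char>(file)),
                         std::istreambuf_iterator<char>());

  auto* runtime = nvinfer1::createInferRuntime(logger);
  auto* engine =
      runtime->deserializeCudaEngine(plan.data(), plan.size(), nullptr);
  auto* context = engine->createExecutionContext();

  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // Device buffers sized for the maximum batch, reused across requests.
  void* buffers[2];
  cudaMalloc(&buffers[0], 32 * 3 * 224 * 224 * sizeof(float));  // input
  cudaMalloc(&buffers[1], 32 * 1000 * sizeof(float));           // output

  // Per-request loop with varying batch sizes, as the dynamic batcher produces.
  for (int batch : {1, 4, 8, 2, 16}) {
    // Corresponds to plan_backend.cc:967 -- set the input binding shape
    // to this request's batch size.
    context->setBindingDimensions(0, nvinfer1::Dims4(batch, 3, 224, 224));
    // Corresponds to plan_backend.cc:1025 -- launch inference.
    context->enqueueV2(buffers, stream, nullptr);
    cudaStreamSynchronize(stream);
  }

  cudaFree(buffers[0]);
  cudaFree(buffers[1]);
  cudaStreamDestroy(stream);
  context->destroy();
  engine->destroy();
  runtime->destroy();
  return 0;
}
```

In our setup, GPU memory usage appears to grow over time while this kind of call pattern is running, which is why we suspect the repeated setBindingDimensions calls with changing batch sizes.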