GPU memory leak with dynamic-batch models

We are experiencing a GPU memory leak (not consistently reproducible) when running Triton Server with a model that uses dynamic batching.

OS: Ubuntu 18.04
GPU: GTX 1080Ti
Triton Server Version: 1.10.0

Could it be related to context->setBindingDimensions causing the GPU memory leak?

Suspected call sites:
src/backends/tensorrt/ - call to setBindingDimensions
src/backends/tensorrt/ - call to enqueueV2
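For context, a minimal sketch of the call pattern in question, assuming the TensorRT dynamic-shape API from that era (setBindingDimensions + enqueueV2); this is illustrative, not Triton's actual backend code, and names like `deviceBindings` and `infer` are assumptions:

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>

// Hypothetical per-request inference helper, not Triton's actual code.
// With dynamic shapes, the binding dimensions must be set on the execution
// context before each enqueue whose input shape (e.g. batch size) differs
// from the previous request.
void infer(nvinfer1::IExecutionContext* context,
           void** deviceBindings,        // pre-allocated device buffers
           cudaStream_t stream,
           int inputBindingIndex,
           nvinfer1::Dims inputDims)     // dims for this request's batch
{
    // Resize the input binding for this request's dynamic batch.
    context->setBindingDimensions(inputBindingIndex, inputDims);

    // Launch inference asynchronously on the given CUDA stream.
    context->enqueueV2(deviceBindings, stream, nullptr);
}
```

If the leak only appears when batch sizes vary between requests, it would point at this resize-then-enqueue path rather than the steady-state inference loop.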