Memory leak in TensorRT InstanceNormalization

Hi,

I am converting a PyTorch model into ONNX and then into a TensorRT engine for inference. The model contains InstanceNormalization layers.
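In case it helps, the ONNX-to-TensorRT conversion step looks roughly like this (a minimal sketch, not my exact script; the file names and workspace size are placeholders):

```python
# Sketch of the ONNX -> TensorRT conversion (TensorRT 7 Python API).
# "model.onnx" / "model.trt" are placeholder file names.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(EXPLICIT_BATCH) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit("ONNX parse failed")
    builder.max_workspace_size = 1 << 30  # 1 GiB
    engine = builder.build_cuda_engine(network)
    with open("model.trt", "wb") as f:
        f.write(engine.serialize())
```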

When I run inference on the TRT engine containing InstanceNormalization iteratively, the GPU memory allocation increases on every iteration.

When I replace InstanceNormalization with BatchNormalization, there is no such memory leak.
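A loop along these lines is enough to watch the free GPU memory shrink across iterations (again just a sketch, not my exact script; the engine path and tensor shapes are placeholders):

```python
# Sketch: run the engine repeatedly and print free GPU memory per iteration.
import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

inp = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape
out = np.empty((1, 3, 224, 224), dtype=np.float32)       # placeholder shape
d_inp = cuda.mem_alloc(inp.nbytes)
d_out = cuda.mem_alloc(out.nbytes)
stream = cuda.Stream()

for i in range(100):
    cuda.memcpy_htod_async(d_inp, inp, stream)
    context.execute_async_v2([int(d_inp), int(d_out)], stream.handle)
    cuda.memcpy_dtoh_async(out, d_out, stream)
    stream.synchronize()
    free, total = cuda.mem_get_info()
    print(f"iter {i}: free GPU memory {free / 2**20:.1f} MiB")
```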

Thanks.

Hi,

Can you provide the following information so we can better help?

Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version

Also, please share the scripts / model file to reproduce the issue.

  1. Please make a couple of simple dummy models containing just one or two layers, one with BatchNorm and one with InstanceNorm ops, that reproduce the issue, rather than a large, complicated model (something along the lines of the sketch after this list).
  2. Please share the script you’re using to run inference on these models iteratively.
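For instance, a pair of one-layer models like this would be ideal (a minimal sketch; the layer sizes, input shape, and file names are just placeholders):

```python
# Sketch of two dummy one-layer models, exported to ONNX for TensorRT.
import torch
import torch.nn as nn

class InstanceNormNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.norm = nn.InstanceNorm2d(3, affine=True)

    def forward(self, x):
        return self.norm(x)

class BatchNormNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.norm = nn.BatchNorm2d(3)

    def forward(self, x):
        return self.norm(x)

dummy = torch.randn(1, 3, 224, 224)  # placeholder input shape
torch.onnx.export(InstanceNormNet().eval(), dummy, "instancenorm.onnx", opset_version=11)
torch.onnx.export(BatchNormNet().eval(), dummy, "batchnorm.onnx", opset_version=11)
```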

Hi,

The details are as follows:

Linux distro and version - Ubuntu 16.04
GPU type - RTX 2080Ti
Nvidia driver version - 435.21
CUDA version - 10.0
CUDNN version - 7.6.5
Python version [if using python] - 3.7.5
PyTorch version - 1.3.1
TensorRT version - 7.0.0.11

The script and the model you asked for are uploaded, and you can download them from the following link:
https://we.tl/t-hS8nE7RfXr

Thanks.

Hi,

Sorry for the delayed response. I believe this was fixed in the open-source components here: https://github.com/NVIDIA/TensorRT/pull/315/files

You can build the OSS components on top of your TensorRT install following the instructions in the README here: https://github.com/NVIDIA/TensorRT

There’s an example script to automate this here: https://github.com/rmccorm4/tensorrt-utils/blob/master/OSS/build_OSS.sh