How to release all GPU memory after saving built engine?

Description

How can I release all GPU memory after saving the built engine? I have destroyed the builder, parser, GIE model, and engine, but about 1 GB of GPU memory is still not released. How can I free it?
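
For context, this is roughly the pattern described above, sketched with the TensorRT 8.x Python API (an assumption, as is the ONNX input at `model.onnx`; adjust to your actual workflow):

```python
import gc
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)

# Drop every reference so TensorRT can free its own device allocations;
# roughly 1 GB of GPU memory remains in use even after this.
del serialized_engine, config, parser, network, builder
gc.collect()
```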

Environment

TensorRT Version: 8.2.4.15
GPU Type: GTX 1660
Nvidia Driver Version: 511.23
CUDA Version: 11.6
CUDNN Version: 8
Operating System + Version: Windows 10
Python Version (if applicable): 3.8
PyTorch Version (if applicable): 1.9.1

Hi,

Could you please share with us a minimal repro script and model for better debugging?

Other allocations come from libraries (CUDA context, cuDNN, cuBLAS, etc.) and are not avoidable.
After cuBLAS and cuDNN are loaded, some host and device memory is reserved and will not be released until you dlclose the library or exit the process (some of it is for GPU kernels, some for kernel management on the host).
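
To see how much of the remaining usage is such a reservation, you can sample device memory after destroying your TensorRT objects. A small sketch using the `pynvml` package (an assumption; it is a separate package, e.g. `pip install nvidia-ml-py`):

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
# Note: this is device-wide usage, so it also counts other processes.
print(f"used: {info.used / 1024**2:.0f} MiB of {info.total / 1024**2:.0f} MiB")
pynvml.nvmlShutdown()
```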

For the CUDA context, cuDNN, and cuBLAS, the memory allocation starts with your first API call into the library and is released only when the library is unloaded (or the process exits); there is no explicit API to release this memory.
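
Because of this, a common workaround (a pattern, not an official TensorRT API) is to run the build in a short-lived child process, so every library-level reservation is reclaimed when that process exits. A minimal sketch; `build_engine_to_file` is a hypothetical helper you would fill in with your actual build code:

```python
import multiprocessing as mp

def build_engine_to_file(onnx_path: str, engine_path: str) -> None:
    # Import TensorRT and do the full build/serialize here; everything this
    # process (and the libraries it loads) reserves on the GPU is freed
    # when the process terminates.
    ...

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # "spawn" avoids inheriting a CUDA context
    p = ctx.Process(target=build_engine_to_file,
                    args=("model.onnx", "model.engine"))
    p.start()
    p.join()
    # The parent process keeps no GPU memory from the build.
```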

Thank you.