How to destroy TensorRT instances?


I am writing a C++ inference application using TensorRT.
Some functions, such as createInferRuntime() or deserializeCudaEngine(), return raw pointers.
However, there is no description of whether we need to call delete explicitly for each function/method, although the developer guide shows delete being called on some objects.

How should I destroy an object that is returned by TensorRT functions?


This question is about the programming model, so I have not filled in the fields below except for the TensorRT version.

TensorRT Version: 8.2

GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):



Since TensorRT 8.0, the destroy() methods are deprecated: you can just delete the object. Alternatively, you can hand the pointer to a smart pointer such as std::unique_ptr so the release happens automatically.
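The two options can be sketched like this. TensorRT itself is not required to illustrate the ownership pattern, so IRuntimeLike and createRuntimeLike() below are hypothetical stand-ins for an interface such as nvinfer1::IRuntime and a factory such as createInferRuntime(), which return raw owning pointers; the assumption is that, as in TensorRT 8.x, the interface has a public virtual destructor so plain delete works.

```cpp
#include <memory>

// Stand-in for a TensorRT interface such as nvinfer1::IRuntime.
// Since TensorRT 8.0 these interfaces can be released with `delete`,
// which is what makes std::unique_ptr usable without a custom deleter.
struct IRuntimeLike {
    virtual ~IRuntimeLike() = default;
};

struct RuntimeImpl : IRuntimeLike {
    explicit RuntimeImpl(bool* alive) : alive(alive) { *alive = true; }
    ~RuntimeImpl() override { *alive = false; }
    bool* alive;  // flips to false when the object is destroyed
};

// Mimics a factory such as createInferRuntime(): returns a raw
// pointer whose ownership passes to the caller.
inline IRuntimeLike* createRuntimeLike(bool* alive) {
    return new RuntimeImpl(alive);
}

// Option 1: plain delete, replacing the deprecated runtime->destroy().
inline bool stillAliveAfterDelete() {
    bool alive = false;
    IRuntimeLike* runtime = createRuntimeLike(&alive);
    delete runtime;
    return alive;  // false: the object has been released
}

// Option 2: hand ownership to std::unique_ptr immediately, so the
// object is released automatically even on early return or exception.
inline bool stillAliveAfterUniquePtr() {
    bool alive = false;
    {
        std::unique_ptr<IRuntimeLike> runtime{createRuntimeLike(&alive)};
    }  // destructor runs when the unique_ptr goes out of scope
    return alive;  // false
}
```

With the real API the second option is just `std::unique_ptr<nvinfer1::IRuntime> runtime{createInferRuntime(logger)};`, and the same pattern applies to the engine and execution context.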

Thank you.