createInferRuntime in a DLL gets stuck forever


I have the following two lines of code in my DLL. When execution reaches the second line, it blocks there forever and never returns. How can I fix this?

static std::unique_ptr<Logger> s_logger = std::make_unique<Logger>();
static std::unique_ptr<nvinfer1::IRuntime> s_runtime = std::unique_ptr<nvinfer1::IRuntime>(nvinfer1::createInferRuntime(*s_logger));
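One plausible cause for a hang at exactly this point is the Windows loader lock: static initializers in a DLL run inside DllMain while the loader lock is held, and createInferRuntime may load CUDA DLLs and create threads, which can deadlock in that context. A common workaround is to defer creation to first use instead of DLL static initialization. A minimal sketch, with the TensorRT types replaced by stand-ins so it compiles on its own (in the real DLL, use nvinfer1::IRuntime and your Logger):

```cpp
#include <memory>

// Stand-ins for the TensorRT types so this sketch compiles without
// the library; substitute the real nvinfer1 types in the actual DLL.
struct Logger {};
struct IRuntime {};
static IRuntime* createInferRuntime(Logger&) { return new IRuntime(); }

// Lazily create the runtime on first use, from a normal function call,
// instead of in a DLL-level static initializer (which runs inside
// DllMain under the Windows loader lock). Function-local statics are
// initialized thread-safely since C++11.
static IRuntime& getRuntime() {
    static Logger s_logger;  // trivial construction is safe here
    static std::unique_ptr<IRuntime> s_runtime(createInferRuntime(s_logger));
    return *s_runtime;
}
```

Callers then use getRuntime() wherever the old s_runtime global was referenced, so no TensorRT code runs while the DLL is still being loaded.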


TensorRT Version:
GPU Type: dGPU
Nvidia Driver Version: 531
CUDA Version: 11.6
CUDNN Version: 8.4
Operating System + Version: Windows 10
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Please check the below links, as they might answer your concerns.


Sorry, I’m afraid the above documentation isn’t much help.
I’ve attached a document that reproduces the problem; please take a look at it (21.0 MB).

Help, does anybody have any idea?

Could you please pass a simple Logger object here and check if it works?
Example: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
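For reference, the "simple Logger" suggested here looks roughly like the minimal example in the Developer Guide: derive from nvinfer1::ILogger and forward only warnings and errors. The sketch below mirrors the shape of that interface with a stand-in base class so it compiles without TensorRT; in real code, include NvInfer.h and derive from nvinfer1::ILogger instead:

```cpp
#include <iostream>

// Stand-in mirroring the shape of nvinfer1::ILogger so this compiles
// without the TensorRT headers.
struct ILogger {
    enum class Severity { kINTERNAL_ERROR, kERROR, kWARNING, kINFO, kVERBOSE };
    virtual void log(Severity severity, const char* msg) noexcept = 0;
    virtual ~ILogger() = default;
};

// A minimal logger along the lines of the Developer Guide sample:
// print warnings and errors, drop info/verbose messages.
class SimpleLogger : public ILogger {
public:
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) {
            std::cerr << msg << '\n';
            ++messagesLogged;
        }
    }
    int messagesLogged = 0;  // counter is for this demonstration only
};
```

A logger this small has no interesting construction of its own, which is the point of the suggestion: it rules the Logger out as the source of the hang.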

Thank you for your reply.

I changed the Logger to the simplest form as you suggested, but the result is the same: it still blocks while creating the runtime.

Here are some experiments I did.

  1. Changed TensorRT to version 8.6: it still hangs.
  2. Moved the runtime initialization from the DLL into an exe executable: it works.
  3. Ported the same code to Ubuntu: it works.
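The second experiment (the same code works in an exe but hangs in the DLL) is the classic signature of a deadlock under the Windows loader lock during DLL static initialization. One way out, if that is indeed the cause, is to export an explicit init function from the DLL and have the host call it after LoadLibrary has returned. A sketch under that assumption, with stand-in types instead of real TensorRT and a hypothetical trtInit entry-point name:

```cpp
#include <memory>

// Stand-ins so the sketch compiles without the TensorRT headers.
struct Logger {};
struct IRuntime {};
static IRuntime* createInferRuntime(Logger&) { return new IRuntime(); }

#ifdef _WIN32
#define API __declspec(dllexport)
#else
#define API
#endif

// Globals are declared but NOT initialized at DLL load time.
static std::unique_ptr<Logger> s_logger;
static std::unique_ptr<IRuntime> s_runtime;

// Hypothetical exported entry point: the host application calls this
// once after LoadLibrary() has returned, so no TensorRT code runs
// while the Windows loader lock is held.
extern "C" API bool trtInit() {
    if (!s_runtime) {
        s_logger = std::make_unique<Logger>();
        s_runtime.reset(createInferRuntime(*s_logger));
    }
    return s_runtime != nullptr;
}
```

The host would call trtInit() right after loading the DLL and check its return value before using any inference functions; repeated calls are harmless because creation only happens once.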

Any other suggestions?