InferDeleter crash for dynamic network


After changing my network from static to dynamic, the program crashes in InferDeleter when destroying the objects, with the following assertion:

[04/28/2021-10:27:22] [F] [TRT] Assertion failed: !mProfiles.empty() || mIsAllDimensionsStatic

What do I need to do before destroying the objects?


TensorRT Version: TensorRT-
GPU Type: NVIDIA GeForce GTX 1660 Ti with Max-Q Design
Nvidia Driver Version:
CUDA Version: 11
CUDNN Version: 8.2
Operating System + Version: Windows 10
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):


Hi @oelgendy1,

Could you please share the script, model and log files so we can help better?



This is the onnx model
UNetModel.onnx (1.8 MB)

And this is the script for building the network:

// Note: for a fresh build, the execution context should be created after
// buildEngineWithConfig() below, not from a previously built mEngine.
context = SampleUniquePtr<nvinfer1::IExecutionContext>(mEngine->createExecutionContext());

auto builder = makeUnique(nvinfer1::createInferBuilder(sample::gLogger.getTRTLogger()));
if (!builder) return false;

const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
auto network = makeUnique(builder->createNetworkV2(explicitBatch));
if (!network) return false; 

auto config = makeUnique(builder->createBuilderConfig());
if (!config) return false;

auto parser = makeUnique(nvonnxparser::createParser(*network, sample::gLogger.getTRTLogger()));
if (!parser) return false; 

auto parsed = parser->parseFromFile("UNetModel.onnx", static_cast<int>(sample::gLogger.getReportableSeverity()));
if(!parsed) return false;


auto profile = builder->createOptimizationProfile();
const auto inputName = network->getInput(0)->getName();
profile->setDimensions(inputName, nvinfer1::OptProfileSelector::kMIN, nvinfer1::Dims4{ 1, 1, 2048, 1280 });
profile->setDimensions(inputName, nvinfer1::OptProfileSelector::kOPT, nvinfer1::Dims4{ 1, 1, 2080, 2080 });
profile->setDimensions(inputName, nvinfer1::OptProfileSelector::kMAX, nvinfer1::Dims4{ 1, 1, 2080, 2304 });

mEngine = makeUnique(builder->buildEngineWithConfig(*network, *config)); 
if (!mEngine) return false;
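One thing worth noting about the snippet above (a hedged observation, not a confirmed diagnosis): the optimization profile is created but never registered with the builder config. For a dynamic-shape network, the TensorRT C++ API requires attaching the profile via IBuilderConfig::addOptimizationProfile before building, which would line up with the failed assertion about mProfiles being empty. A minimal sketch, reusing the variable names from the snippet:

```cpp
// Register the profile with the config before building the engine;
// without this, the built engine ends up with no optimization profiles.
config->addOptimizationProfile(profile);

mEngine = makeUnique(builder->buildEngineWithConfig(*network, *config));
```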

And this is the inference script:

void* buf[2];  // device pointers for the input and output bindings
context->setBindingDimensions(0, nvinfer1::Dims4{ 1, 1, height, width });
if (!context->allInputDimensionsSpecified()) return false;
buf[0] = (float_t*)halide_cuda_get_device_ptr(nullptr, inputBuffer);
buf[1] = (float_t*)halide_cuda_get_device_ptr(nullptr, outputBuffer);
if (!context->executeV2(buf)) return false;
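As a general TensorRT usage note for dynamic shapes (not something confirmed from this thread): once the input binding dimensions are set, the output shape becomes concrete and can be queried before sizing the output buffer. A sketch, assuming binding index 1 is the output:

```cpp
// After setBindingDimensions() the output dimensions are resolved
// and can be queried to size the output buffer correctly.
nvinfer1::Dims outDims = context->getBindingDimensions(1);
```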

The network builds successfully and inference runs successfully; the crash happens when destroying the objects.

I fixed this assertion error by destroying the context first.


However, the program still crashes:
Exception thrown at 0x0000000000000000 in application.exe: 0xC0000005: Access violation executing location 0x0000000000000000.

Is it because the GPU memory is not explicitly managed by TensorRT?

My program is multi-threaded, but all neural network operations are done in a single thread.

Please refer to the link below regarding thread-safety best practices when using TRT:


Thanks for your reply @SunilJB . Yes, all variables are declared and used in a single thread. Other threads are totally independent.

Hi @oelgendy1,

Could you please share the full code? The piece you have shared doesn't give us much information.

Thank you.