Any ways to debug `IBuilder::buildEngineWithConfig` errors?


I’m trying to build TensorRT engines via the TF -> ONNX -> TensorRT path.
My model has custom ops, so I hacked the ONNX parser to populate the INetworkDefinition correctly.
However, the call to IBuilder::buildEngineWithConfig returns nullptr, and I could not find any useful debug information about why.

Any ways to debug this and find out what went wrong during the IBuilder call?

NOTES: I registered a custom plugin with TensorRT, using both REGISTER_TENSORRT_PLUGIN and a custom registration function.
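For reference, the static registration part looks roughly like this (`MyOpPluginCreator` is a placeholder for my creator class, not the real name):

```cpp
#include "NvInferRuntimeCommon.h"

// MyOpPluginCreator is a stand-in for a class implementing
// nvinfer1::IPluginCreator for the custom op.
// REGISTER_TENSORRT_PLUGIN runs at static-initialization time and adds the
// creator to the global plugin registry, so the ONNX parser can resolve the
// op by its name and version.
REGISTER_TENSORRT_PLUGIN(MyOpPluginCreator);
```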


TensorRT Version: 7.0.0
GPU Type: 2080Ti
Nvidia Driver Version: 440
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 16.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Hi @golden0080gba,
Request you to share the logs, script, and model if possible, so that we can assist you better.


Hi @AakankshaS
I managed to make it work by using a larger workspace size when building engines. But I wish the function didn’t fail silently; at least some logging would be great.
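For reference, the fix amounted to something like this sketch against the TensorRT 7 API (the helper function and its name are mine; the builder and parsed network are assumed to exist already):

```cpp
#include <cstddef>
#include "NvInfer.h"

// Sketch: build an engine with an explicitly enlarged workspace.
// The network is assumed to be already populated by the (hacked) ONNX parser.
nvinfer1::ICudaEngine* buildWithWorkspace(nvinfer1::IBuilder& builder,
                                          nvinfer1::INetworkDefinition& network,
                                          std::size_t workspaceBytes)
{
    nvinfer1::IBuilderConfig* config = builder.createBuilderConfig();
    config->setMaxWorkspaceSize(workspaceBytes);  // e.g. 1ULL << 30 for 1 GiB

    // Returns nullptr on failure -- always check before use.
    nvinfer1::ICudaEngine* engine =
        builder.buildEngineWithConfig(network, *config);

    config->destroy();  // TensorRT 7 objects are destroyed explicitly
    return engine;
}
```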


Hi @golden0080gba,
Request you to share verbose logs, as usually a warning appears when there is an issue with workspace size.


@AakankshaS unfortunately in my use case, nothing is printed from the TensorRT logger I passed to the IBuilder.
The only thing I can tell is that the function returns nullptr, without any logs.
However, when I could successfully build engines, it prints out something like:

[TensorRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output
[TensorRT]: Detected 2 inputs and 3 output network tensors.

Note that I set the severity to INFO for my loggers.
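A lower threshold might surface more of the build diagnostics. A minimal verbose logger is a sketch against the TensorRT 7 ILogger interface (the class name is mine):

```cpp
#include <iostream>
#include "NvInfer.h"

// Forwards every message up to and including kVERBOSE to stderr, so
// tactic/workspace diagnostics suppressed at kINFO become visible.
class VerboseLogger : public nvinfer1::ILogger
{
public:
    void log(Severity severity, const char* msg) noexcept override
    {
        // Severity enumerators are ordered most-severe-first, so "<=" keeps
        // everything at or above the kVERBOSE threshold.
        if (severity <= Severity::kVERBOSE)
            std::cerr << "[TensorRT] " << msg << std::endl;
    }
};

// Usage: pass an instance when creating the builder, e.g.
//   VerboseLogger logger;
//   nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
```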