buildEngineWithConfig returns nullptr

Environment

TensorRT Version : TensorRT-7.2.3.4
GPU Type :
Nvidia Driver Version :
CUDA Version :
CUDNN Version :
Operating System + Version : Windows10
Python Version (if applicable) : 3.7
TensorFlow Version (if applicable) : 2.3.1
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :

I created a model with TensorFlow. Afterwards I used tf2onnx to create an .onnx model. Now I want to use the TensorRT C++ API in the Qt Creator IDE.

This is my code:
IBuilder* builder = createInferBuilder(sample::gLogger);
const auto explicitBatch = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, sample::gLogger);
QString file = "path/to/model.onnx";
parser->parseFromFile(file.toLocal8Bit().constData(), static_cast<int>(ILogger::Severity::kVERBOSE));
IBuilderConfig* config = builder->createBuilderConfig();
config->setMaxWorkspaceSize(1 << 20);
ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
if (!engine) {
    qDebug() << "FAILED";
}

Unfortunately, engine is a nullptr. I do not know how to debug buildEngineWithConfig(…), and there are no error messages in the Qt Creator console. How do I get access to the logger information?
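One way to surface these messages is to pass a custom logger instead of sample::gLogger; a minimal sketch, assuming the TensorRT 7 nvinfer1::ILogger interface:

```cpp
#include <iostream>
#include <NvInfer.h>

// Minimal logger that prints every TensorRT message to stderr, so
// builder/parser errors become visible in any console, including Qt Creator's.
class ConsoleLogger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        // Print everything at kINFO severity or worse; drop only kVERBOSE spam.
        if (severity <= Severity::kINFO)
            std::cerr << "[TRT] " << msg << std::endl;
    }
};

// Usage sketch:
//   ConsoleLogger trtLogger;
//   IBuilder* builder = createInferBuilder(trtLogger);
```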

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the below snippet.

check_model.py

import onnx

filename = "yourONNXmodel"  # placeholder: path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.

In case you are still facing the issue, request you to share the trtexec "--verbose" log for further debugging.
Thanks!

When I run this command:

trtexec.exe --onnx=path/to/model.onnx --verbose

I receive the following results:

&&&& PASSED TensorRT.trtexec # trtexec.exe --onnx=path/to/model.onnx --verbose

So everything is right, isn't it? Also, I can save the serialized engine.
But in the code it does not work.

Hi @OpDaSo_B,

We recommend you share the complete error logs and an issue-repro inference script/model for better assistance.

Thank you.

In the attached file is the output of the command: trtexec.exe --onnx=path/to/model.onnx --verbose
log.txt (883.8 KB)

@OpDaSo_B,

Looks like you missed sharing the error logs. We request you to please share the output of the code mentioned in your post for better debugging.

Thank you.

This is the error log:

Input filename: path/to/model.onnx
ONNX IR version: 0.0.4
Opset version: 9
Producer name: tf2onnx
Producer version: 1.9.0
Domain:
Model version: 0
Doc string:

[07/01/2021-14:04:39] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
----- Parsing of ONNX model path/to/model.onnx is Done ----
[07/01/2021-14:04:39] [E] [TRT] Network has dynamic or shape inputs, but no optimization profile has been defined.
[07/01/2021-14:04:39] [E] [TRT] Network validation failed.

@OpDaSo_B,

As you mentioned you're facing an issue with the below code, request you to share the output logs when you run this code.

IBuilder* builder = createInferBuilder(sample::gLogger);
const auto explicitBatch = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, sample::gLogger);
QString file = "path/to/model.onnx";
parser->parseFromFile(file.toLocal8Bit().constData(), static_cast<int>(ILogger::Severity::kVERBOSE));
IBuilderConfig* config = builder->createBuilderConfig();
config->setMaxWorkspaceSize(1 << 20);
ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
if (!engine) {
    qDebug() << "FAILED";
}

The above code was written in Qt Creator with no console output. Therefore, I do not have an output log.
Further, I created a Visual Studio project to receive the console output.

This is the code in VS:
// Create the builder and network.
IBuilder* builder = createInferBuilder(sample::gLogger);
const auto explicitBatch = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
// Create the ONNX parser:
nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, sample::gLogger);
// Parse the model:
std::string file = "path/to/model.onnx";
try {
    parser->parseFromFile(file.c_str(), static_cast<int>(ILogger::Severity::kVERBOSE));
}
catch (const std::exception& ex) {
    std::cout << ex.what();
}
// Build the engine using the builder object.
// Note: TensorRT reports most failures through the logger and null return
// values rather than by throwing, so these catch blocks usually stay silent.
IBuilderConfig* config = builder->createBuilderConfig();
try {
    ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine)
    {
        std::cout << "failed";
    }
}
catch (const std::exception& ex) {
    std::cout << ex.what();
}
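Since the parser reports failures through its return value and an internal error list rather than C++ exceptions, a sketch of explicit error checking (assuming the TensorRT 7 onnx parser API, using the parser and file variables from the code above):

```cpp
// Check parseFromFile's boolean return value instead of relying on
// exceptions, and dump any errors the parser accumulated.
if (!parser->parseFromFile(file.c_str(),
        static_cast<int>(ILogger::Severity::kVERBOSE))) {
    for (int i = 0; i < parser->getNbErrors(); ++i) {
        std::cout << "Parser error: " << parser->getError(i)->desc() << std::endl;
    }
}
```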

And this is the output in the console:

Input filename: path/to/model.onnx
ONNX IR version: 0.0.4
Opset version: 9
Producer name: tf2onnx
Producer version: 1.9.0
Domain:
Model version: 0
Doc string:

[07/01/2021-14:04:39] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
----- Parsing of ONNX model path/to/model.onnx is Done ----
[07/01/2021-14:04:39] [E] [TRT] Network has dynamic or shape inputs, but no optimization profile has been defined.
[07/01/2021-14:04:39] [E] [TRT] Network validation failed.
failed

@OpDaSo_B,

You have to use an optimization profile.
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#work_dynamic_shapes
A sample is available there.
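Since the error log says "Network has dynamic or shape inputs, but no optimization profile has been defined", the profile must be attached to the builder config before buildEngineWithConfig. A minimal sketch, where the tensor name "input" and the 28x28x1 dimensions are placeholders; take the real values from your own model (e.g. network->getInput(0)->getName()):

```cpp
// Create an optimization profile describing the allowed range of the
// dynamic input shape, then register it on the builder config.
nvinfer1::IOptimizationProfile* profile = builder->createOptimizationProfile();
profile->setDimensions("input", nvinfer1::OptProfileSelector::kMIN,
                       nvinfer1::Dims4{1, 28, 28, 1});
profile->setDimensions("input", nvinfer1::OptProfileSelector::kOPT,
                       nvinfer1::Dims4{1, 28, 28, 1});
profile->setDimensions("input", nvinfer1::OptProfileSelector::kMAX,
                       nvinfer1::Dims4{8, 28, 28, 1});
config->addOptimizationProfile(profile);
// Now builder->buildEngineWithConfig(*network, *config) can pass
// network validation.
```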

Thank you.