TensorRT Version: TensorRT-7.2.3.4
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version: Windows 10
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): 2.3.1
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
I created a model with TensorFlow. Afterwards, I used tf2onnx to create an .onnx model. Now I want to use the TensorRT C++ API in the Qt Creator IDE.
This is my code:
// Create the builder and an explicit-batch network.
IBuilder* builder = createInferBuilder(sample::gLogger);
const auto explicitBatch = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
// Create the ONNX parser and parse the model file.
nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, sample::gLogger);
QString file = "path/to/model.onnx";
parser->parseFromFile(file.toLocal8Bit().constData(), static_cast<int>(ILogger::Severity::kVERBOSE));
// Configure and build the engine.
IBuilderConfig* config = builder->createBuilderConfig();
config->setMaxWorkspaceSize(1 << 20);
ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
if (!engine) {
    qDebug() << "FAILED";
}
Unfortunately, engine is a nullptr. I do not know how to debug buildEngineWithConfig(…), and there are no error messages in the Qt Creator console. How can I get access to the logger information?
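One way to surface the logger output independent of the IDE is to pass a custom nvinfer1::ILogger implementation to createInferBuilder and createParser instead of sample::gLogger. A minimal sketch, assuming the messages should simply go to stderr (the ConsoleLogger name is my own, and the exact log() signature can vary slightly between TensorRT versions):

#include <iostream>
#include "NvInfer.h"

// Forwards every TensorRT message at or above INFO severity to stderr.
class ConsoleLogger : public nvinfer1::ILogger
{
public:
    void log(Severity severity, const char* msg) noexcept override
    {
        // Lower enum values are more severe; this skips only kVERBOSE.
        if (severity <= Severity::kINFO)
            std::cerr << msg << std::endl;
    }
};

ConsoleLogger gConsoleLogger;
// Usage: IBuilder* builder = createInferBuilder(gConsoleLogger);

With a logger like this, the builder errors that explain a null engine appear directly in the application's own console.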
Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the snippet below:
check_model.py
import onnx
filename = "yourONNXmodel.onnx"  # replace with the path to your model
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
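For example, with trtexec on your PATH (substitute your actual model path; --onnx and --verbose are standard trtexec options):

trtexec --onnx=path/to/model.onnx --verbose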
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!
The above code was written in Qt Creator with no console output, so I do not have an output log. I therefore created a Visual Studio project to capture the console output.
This is the code in VS:
// Create the builder and network.
IBuilder* builder = createInferBuilder(sample::gLogger);
const auto explicitBatch = 1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
// Create the ONNX parser:
nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, sample::gLogger);
// Parse the model; parseFromFile returns false on failure and reports
// details through the logger rather than by throwing.
std::string file = "path/to/model.onnx";
try {
    if (!parser->parseFromFile(file.c_str(), static_cast<int>(ILogger::Severity::kVERBOSE))) {
        std::cout << "parsing failed";
    }
}
catch (const std::exception& ex) {
    std::cout << ex.what();
}
// Build the engine using the builder object:
IBuilderConfig* config = builder->createBuilderConfig();
try {
    ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine)
    {
        std::cout << "failed";
    }
}
catch (const std::exception& ex) {
    std::cout << ex.what();
}
[07/01/2021-14:04:39] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
----- Parsing of ONNX model path/to/model.onnx is Done ----
[07/01/2021-14:04:39] [E] [TRT] Network has dynamic or shape inputs, but no optimization profile has been defined.
[07/01/2021-14:04:39] [E] [TRT] Network validation failed.
failed
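The two [E] lines are the reason buildEngineWithConfig returns null: an explicit-batch network with dynamic input shapes needs at least one optimization profile in the builder config. A minimal sketch of adding one before the build call; the input name "input" and the Dims4 values are hypothetical placeholders for the model's real input name and shape range:

// Describe the min/opt/max shapes TensorRT should optimize for.
IOptimizationProfile* profile = builder->createOptimizationProfile();
profile->setDimensions("input", OptProfileSelector::kMIN, Dims4{1, 224, 224, 3});
profile->setDimensions("input", OptProfileSelector::kOPT, Dims4{8, 224, 224, 3});
profile->setDimensions("input", OptProfileSelector::kMAX, Dims4{32, 224, 224, 3});
config->addOptimizationProfile(profile);
// With a profile attached, the build can succeed for dynamic inputs.
ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);

The model's actual input name and dimensions can be read from network->getInput(0)->getName() and getDimensions(), where -1 marks each dynamic axis.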