Description
I use the TensorRT API to convert an ONNX model to a TensorRT engine. The conversion takes a lot of time, and I want to stop the API from printing build messages to the terminal. How do I configure this?
Environment
TensorRT Version: v8.0.1.6
GPU Type: RTX3060
Nvidia Driver Version: V511
CUDA Version: 11.3
CUDNN Version: 8.2
Operating System + Version: Ubuntu 20.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Function
#include &lt;memory&gt;
#include &lt;stdexcept&gt;
#include &lt;string&gt;
#include &lt;NvInfer.h&gt;
#include &lt;NvOnnxParser.h&gt;

// Deleter that releases TensorRT objects via their destroy() method
template &lt;typename T&gt;
struct Destroy
{
    void operator()(T *t) const { t->destroy(); }
};

/// Parse an ONNX file and create a TRT engine
nvinfer1::ICudaEngine *createCudaEngine(const std::string &onnxFileName, nvinfer1::ILogger &logger)
{
    using namespace std;
    using namespace nvinfer1;

    // Initializing with braces {} or parentheses () both work here
    unique_ptr&lt;IBuilder, Destroy&lt;IBuilder&gt;&gt; builder{createInferBuilder(logger)};
    unique_ptr&lt;INetworkDefinition, Destroy&lt;INetworkDefinition&gt;&gt; network{
        builder->createNetworkV2(1U &lt;&lt; (unsigned)NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)};
    unique_ptr&lt;nvonnxparser::IParser, Destroy&lt;nvonnxparser::IParser&gt;&gt; parser{
        nvonnxparser::createParser(*network, logger)};

    if (!parser->parseFromFile(onnxFileName.c_str(), static_cast&lt;int&gt;(ILogger::Severity::kINFO)))
        throw runtime_error("ERROR: could not parse ONNX model " + onnxFileName + "!");

    // Modern version with config
    unique_ptr&lt;IBuilderConfig, Destroy&lt;IBuilderConfig&gt;&gt; config{builder->createBuilderConfig()};
    // This was needed for TensorRT 6; it is not needed in 7 and later
    config->setMaxWorkspaceSize(64 * 1024 * 1024);

    return builder->buildEngineWithConfig(*network, *config);
}
int main()
{
    myLogger logger;
    std::string onnx_filepath = "./model/resnet.onnx";
    std::unique_ptr&lt;nvinfer1::ICudaEngine, Destroy&lt;nvinfer1::ICudaEngine&gt;&gt; engine(
        createCudaEngine(onnx_filepath, logger));
    return 0;
}
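For reference, myLogger is not shown above; it is a plain nvinfer1::ILogger implementation along these lines (a minimal sketch; the class name and the severity threshold are just what I use in my code):

#include &lt;iostream&gt;
#include &lt;NvInfer.h&gt;

// Minimal ILogger implementation: prints every message at kINFO severity or above
class myLogger : public nvinfer1::ILogger
{
public:
    void log(Severity severity, const char *msg) noexcept override
    {
        if (severity &lt;= Severity::kINFO)
            std::cout &lt;&lt; msg &lt;&lt; std::endl;
    }
};

With kINFO as the threshold, all of the builder's info messages still reach the terminal, and that is the output I want to suppress.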
Does the TensorRT API provide an option to disable these verbose build messages?