How to disable output of verbose messages when creating a TRT model from an ONNX model


I use the TensorRT API to convert an ONNX model to a TensorRT engine model. The build takes a long time and prints many messages to the terminal along the way. How can I configure the API so that it does not print these build messages?


TensorRT Version: v8.0.1.6
GPU Type: RTX3060
Nvidia Driver Version: V511
CUDA Version: 11.3
CUDNN Version: 8.2
Operating System + Version: ubuntu20.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):


   /// Parse an ONNX file and create a TRT engine
   nvinfer1::ICudaEngine *createCudaEngine(const std::string &onnxFileName,
                                           nvinfer1::ILogger &logger)
   {
       using namespace std;
       using namespace nvinfer1;

       unique_ptr<IBuilder, Destroy<IBuilder>>
           builder{createInferBuilder(logger)};  // using () also works
       unique_ptr<INetworkDefinition, Destroy<INetworkDefinition>>
           network{builder->createNetworkV2(1U << static_cast<unsigned>(
               NetworkDefinitionCreationFlag::kEXPLICIT_BATCH))};
       unique_ptr<nvonnxparser::IParser, Destroy<nvonnxparser::IParser>>
           parser{nvonnxparser::createParser(*network, logger)};

       if (!parser->parseFromFile(onnxFileName.c_str(),
                                  static_cast<int>(ILogger::Severity::kINFO)))
           throw runtime_error("ERROR: could not parse ONNX model " +
                               onnxFileName + " !");

       // Modern version with config
       unique_ptr<IBuilderConfig, Destroy<IBuilderConfig>>
           config{builder->createBuilderConfig()};
       // This is needed for TensorRT 6, not needed by 7!
       config->setMaxWorkspaceSize(64 * 1024 * 1024);
       return builder->buildEngineWithConfig(*network, *config);
   }

   // ... in the caller:
   myLogger logger;
   std::string onnx_filepath = "./model/resnet.onnx";
   unique_ptr<ICudaEngine, Destroy<ICudaEngine>>
       engine{createCudaEngine(onnx_filepath, logger)};

Does Tensorrt API provide such options to disable verbose message?
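Yes: TensorRT routes all build-time messages through the `nvinfer1::ILogger` you pass to `createInferBuilder` and `createParser`, so you control verbosity by filtering on severity inside your logger's `log()` override. Below is a standalone sketch of that filter; the `FilteredLogger` name and the stand-in `Severity` enum are mine, chosen to mirror `nvinfer1::ILogger::Severity`. In real code you would derive from `nvinfer1::ILogger` (in `NvInfer.h`), where the TRT 8 override signature is `void log(Severity, const char*) noexcept override`.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Stand-in for nvinfer1::ILogger::Severity (same ordering as TensorRT:
// kINTERNAL_ERROR < kERROR < kWARNING < kINFO < kVERBOSE).
enum class Severity : int {
    kINTERNAL_ERROR = 0,
    kERROR          = 1,
    kWARNING        = 2,
    kINFO           = 3,
    kVERBOSE        = 4
};

// Logger that drops kINFO and kVERBOSE messages. Against the real API this
// would be:
//   class FilteredLogger : public nvinfer1::ILogger {
//       void log(Severity s, const char* msg) noexcept override { ... }
//   };
class FilteredLogger {
public:
    std::vector<std::string> printed;  // kept so the filter is easy to inspect

    void log(Severity severity, const char* msg) {
        // Keep only warnings and errors; noisier messages are discarded.
        if (severity <= Severity::kWARNING) {
            std::cout << msg << "\n";
            printed.push_back(msg);
        }
    }
};
```

Passing such a logger in place of `myLogger` above silences the kINFO/kVERBOSE build chatter while still surfacing warnings and errors. Note that suppressing the output does not speed up the build itself; engine building simply takes time.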


Please refer to the following.

Thank you.

It is helpful.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.