I’m getting started with TensorRT; my first goal is to import an ONNX model and save the serialized engine. However, I ran into the problem below:
    File "onnx2_tensorrt.py", line 13, in <module>
        with builder.create_builder_config() as config, builder.build_cuda_engine(network, config) as engine:
    TypeError: build_cuda_engine(): incompatible function arguments. The following argument types are supported:
        1. (self: tensorrt.tensorrt.Builder, network: tensorrt.tensorrt.INetworkDefinition) -> tensorrt.tensorrt.ICudaEngine

    Invoked with: <tensorrt.tensorrt.Builder object at 0x7f79ecd90bc8>, <tensorrt.tensorrt.INetworkDefinition object at 0x7f79ecd90c00>, <tensorrt.tensorrt.IBuilderConfig object at 0x7f79ecdb37a0>
My code is:

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    model_path = 'export.onnx'
    max_batch_size = 32

    builder = trt.Builder(TRT_LOGGER)
    with builder.create_network() as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        with open(model_path, 'rb') as model:
            parser.parse(model.read())
        builder.max_batch_size = max_batch_size
        # This determines the amount of memory available to the builder when building
        # an optimized engine and should generally be set as high as possible.
        builder.max_workspace_size = 1 << 50
        with builder.create_builder_config() as config, builder.build_cuda_engine(network, config) as engine:
            with open('sample.engine', 'wb') as f:
                f.write(engine.serialize())
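From the traceback, build_cuda_engine() only accepts the network, so I suspect the fix is something like the sketch below (assuming build_engine(), which does take a builder config, exists in my TensorRT version), though I haven't confirmed it on my setup:

    # Sketch only, replacing the failing `with ... as engine` block above.
    # Per the traceback, build_cuda_engine() takes just the network; newer
    # TensorRT releases expose build_engine(network, config) instead, and the
    # workspace limit moves onto the config object there.
    with builder.create_builder_config() as config:
        config.max_workspace_size = 1 << 30  # 1 GiB; illustrative value, not from my original script
        if hasattr(builder, 'build_engine'):
            engine = builder.build_engine(network, config)   # config-aware API
        else:
            engine = builder.build_cuda_engine(network)      # matches the signature in the error
        with engine:
            with open('sample.engine', 'wb') as f:
                f.write(engine.serialize())

If that's the right direction, is build_cuda_engine() simply deprecated in favor of build_engine()?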
What am I missing?
BTW, my system is:
- CUDA 10.0 / cuDNN 7.4.2