Description
I’m using both the TRT 8.2.0.6 Python and C++ APIs to run inference on my ONNX model.
When I try to serialize the optimized engine, only the C++ version completes successfully, while the Python version raises an exception: ‘NoneType’ object has no attribute ‘serialize’.
Environment
TensorRT Version: 8.2.0.6
GPU Type: GeForce GTX 108
Nvidia Driver Version: 460.32.03
CUDA Version: 11.2
CUDNN Version: 8.2.1
Operating System + Version: Linux Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): Baremetal
Relevant Files
My model was attached in another topic - Onnx model
Steps To Reproduce
```
self.logger = trt.Logger(trt.Logger.INTERNAL_ERROR)
self.builder = trt.Builder(self.logger)
# max_batch_size is ignored for explicit-batch networks, kept only for parity with the C++ code
self.builder.max_batch_size = maxBatchSize
self.config = self.builder.create_builder_config()
self.EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
self.network = self.builder.create_network(self.EXPLICIT_BATCH)
self.parser = trt.OnnxParser(self.network, self.logger)
self.runtime = trt.Runtime(self.logger)
self.model = model
# parse_from_file returns False on failure; surface the parser errors instead of continuing
if not self.parser.parse_from_file(self.model):
    for i in range(self.parser.num_errors):
        print(self.parser.get_error(i))
    raise RuntimeError("ONNX parse failed")
self.engine = self.builder.build_engine(self.network, self.config)
# build_engine returns None on failure, which makes the next line raise AttributeError
if self.engine is None:
    raise RuntimeError("engine build failed")
self.serializedEngine = self.engine.serialize()
```
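For what it’s worth, the exception text suggests the build itself is failing in Python: `build_engine` signals failure by returning `None` rather than raising, so the subsequent `.serialize()` call is just Python’s ordinary attribute error on `None`. A minimal reproduction of the mechanism outside TensorRT (the variable name `engine` is only illustrative):

```python
engine = None  # what builder.build_engine returns when the build fails

try:
    engine.serialize()
except AttributeError as e:
    # prints: 'NoneType' object has no attribute 'serialize'
    print(e)
```

So the root cause to chase is why `build_engine` returns `None` in Python while the C++ path succeeds, e.g. by checking the parser/builder error logs with a more verbose logger severity than INTERNAL_ERROR.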