Error: 'tensorrt.tensorrt.Builder' object has no attribute 'max_workspace_size'

Hi,

I have the code below:

def PrepareEngine():
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network(EXPLICIT_BATCH) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30
        with open('best.onnx', 'rb') as model:
            if not parser.parse(model.read()):
                print('ERROR: Failed to parse the ONNX file.')
                for error in range(parser.num_errors):
                    print(parser.get_error(error))
        engine = builder.build_cuda_engine(network)

and I am getting an error of

AttributeError: 'tensorrt.tensorrt.Builder' object has no attribute 'max_workspace_size'

Can anyone please help me with this? Below are the details of my hardware:

Jetson Xavier NX
TensorRT version: 8.0.1.6
Jetpack 4.6
Cuda Version: 10.2.300
cuDNN version: 8.2.1.32
Cuda Arch: 7.2
Python: 3.6

I am trying to run the code from here: Jetson/L4T/TRT Customized Example - eLinux.org. I have a custom YOLOv7 model and have converted it into ONNX format. Please help. Thanks.

Hi,

We are moving this post to the Jetson Xavier NX forum to get better help.

Thank you.

Hi,

The sample is originally tested on JetPack 4.5.1 and there are some API changes in TensorRT 8.

We have updated the sample to be compatible with the latest TensorRT 8.5 in JetPack 5.1.
Please check it again.

https://elinux.org/Jetson/L4T/TRT_Customized_Example#OpenCV_with_ONNX_model
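For reference, in TensorRT 8 the workspace setting moved from the Builder to an IBuilderConfig, and build_cuda_engine() was removed in favour of build_serialized_network(). A minimal sketch of the updated flow (the function name and 'best.onnx' path follow the original post, not the official sample, and set_memory_pool_limit() requires TensorRT 8.4 or newer):

```python
def build_engine(onnx_path, logger):
    # TensorRT 8 flow: the workspace limit lives on IBuilderConfig and the
    # engine is built from (network, config); build_cuda_engine() is gone.
    import tensorrt as trt  # deferred so the sketch is self-contained

    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    builder = trt.Builder(logger)
    network = builder.create_network(explicit_batch)
    parser = trt.OnnxParser(network, logger)

    config = builder.create_builder_config()
    # set_memory_pool_limit() replaced max_workspace_size in TensorRT 8.4+
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

    with open(onnx_path, 'rb') as model:
        if not parser.parse(model.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    # Returns a serialized plan (IHostMemory) that can be written to disk
    return builder.build_serialized_network(network, config)
```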

Thanks.

Hi @AastaLLL

I have tried to run the code again, but now I am getting the error below:

Traceback (most recent call last):
  File "app4.py", line 74, in <module>
	engine = PrepareEngine()
  File "app4.py", line 42, in PrepareEngine
	config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
AttributeError: 'tensorrt.tensorrt.IBuilderConfig' object has no attribute 'set_memory_pool_limit'

I am using the yolov7-tiny model. I converted it into ONNX and am using it with this script. I also ran it through trtexec, which worked with no issues and generated the .engine file.

I then tried using this .engine file in the other code from the link you posted (Jetson/L4T/TRT Customized Example - eLinux.org), but got the error below:

[TensorRT] INFO: [MemUsageChange] Init CUDA: CPU +346, GPU +0, now: CPU 435, GPU 6661 (MiB)
[TensorRT] INFO: Loaded engine size: 53 MB
[TensorRT] INFO: [MemUsageSnapshot] deserializeCudaEngine begin: CPU 435 MiB, GPU 6661 MiB
[TensorRT] ERROR: 3: getPluginCreator could not find plugin: EfficientNMS_TRT version: 1
[TensorRT] ERROR: 1: [pluginV2Runner.cpp::load::292] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[TensorRT] ERROR: 4: [runtime.cpp::deserializeCudaEngine::76] Error Code 4: Internal Error (Engine deserialization failed.)
Traceback (most recent call last):
  File "app3.py", line 63, in <module>
	engine = PrepareEngine()
  File "app3.py", line 46, in PrepareEngine
	for binding in engine:
TypeError: 'NoneType' object is not iterable

Please help if you can. Thanks.

Hi @AastaLLL

Am I facing this issue because I am using JetPack 4.6 and TensorRT 8.0?

Hi,

For the workspace issue, please give JetPack 5.1 a try.
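As background: on TensorRT 8.0 (JetPack 4.6) the config attribute is still config.max_workspace_size, while set_memory_pool_limit() only appeared in TensorRT 8.4, which is why the updated sample fails on your board. A small version-tolerant sketch (set_workspace is a hypothetical helper, not part of the sample):

```python
def set_workspace(config, size=1 << 30):
    # Use whichever workspace API the installed TensorRT exposes:
    # set_memory_pool_limit() exists from 8.4 on, while older 8.x
    # builds only have the (since-removed) max_workspace_size field.
    if hasattr(config, "set_memory_pool_limit"):
        import tensorrt as trt
        config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, size)
    else:
        config.max_workspace_size = size
```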
For the plugin error, you can register the TensorRT plugins in your script to see if that works:

...
TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, namespace="")
...

Thanks.

I will try it with JetPack 5.1.
So for now we can close this thread.

Thanks.
Please file a new topic if you meet an error on JetPack 5.1.
