How to use setMaxWorkspaceSize() for Python code


I want to use this reference.

And I got this error with the following command:

root@tx2:/home/test/TensorRT-For-YOLO-Series# python3 -o yolov8s.onnx -e yolov8s.trt --end2end --v8
Namespace(calib_batch_size=8, calib_cache='./calibration.cache', calib_input=None, calib_num_images=5000, conf_thres=0.4, end2end=True, engine='yolov8s.trt', iou_thres=0.5, max_det=100, onnx='yolov8s.onnx', precision='fp16', v8=True, verbose=False, workspace=1)
[03/04/2024-07:37:06] [TRT] [I] [MemUsageChange] Init CUDA: CPU +262, GPU +0, now: CPU 292, GPU 5118 (MiB)
[03/04/2024-07:37:06] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 292 MiB, GPU 5118 MiB
[03/04/2024-07:37:06] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 321 MiB, GPU 5147 MiB
Traceback (most recent call last):
File "", line 308, in <module>
File "", line 266, in main
builder = EngineBuilder(args.verbose, args.workspace)
File "", line 109, in __init__
self.config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace * (2 ** 30))
AttributeError: 'tensorrt.tensorrt.IBuilderConfig' object has no attribute 'set_memory_pool_limit'

I found the reason, as below:
We are using TensorRT 8.2.1, so we need to use setMaxWorkspaceSize() for our use case.
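The split between the two APIs comes down to the installed version: set_memory_pool_limit() only exists from TensorRT 8.4 onward. As a quick sanity check, that condition can be expressed as a small helper (a hypothetical function of my own, not part of the repository's script):

```python
def has_memory_pool_api(version: str) -> bool:
    """Return True if this TensorRT version exposes
    IBuilderConfig.set_memory_pool_limit() (added in TensorRT 8.4)."""
    major, minor = (int(x) for x in version.split(".")[:2])
    return (major, minor) >= (8, 4)

print(has_memory_pool_api("8.2.1"))  # False: 8.2.1 predates the pool API
```

The version string passed in would normally come from `tensorrt.__version__`.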

Is there any way to set the workspace size from Python code with TensorRT 8.2.1?
I couldn't find a Python API for setMaxWorkspaceSize(); I can only see the C++ API. So, do I need to build against the C++ API?
If so, how can I do it?


You can set the workspace size through IBuilderConfig; in the TensorRT 8.2 Python API this is the max_workspace_size attribute:
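A minimal sketch of how this could look; `set_workspace` is my own hypothetical helper, written so that the same call also works on TensorRT 8.4+ where the memory-pool API replaced the old attribute:

```python
def set_workspace(config, size_bytes):
    """Set the builder workspace limit on either TensorRT API generation.

    TensorRT >= 8.4 exposes IBuilderConfig.set_memory_pool_limit(); older
    releases such as 8.2.1 only have the max_workspace_size attribute
    (the Python counterpart of the C++ setMaxWorkspaceSize()).
    """
    if hasattr(config, "set_memory_pool_limit"):
        import tensorrt as trt  # only needed for the pool-type enum
        config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, size_bytes)
    else:
        config.max_workspace_size = size_bytes
```

With a real builder this would be called as `set_workspace(builder.create_builder_config(), 1 << 30)` for a 1 GiB workspace, matching `workspace=1` in the log above.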


I've got another sample and it doesn't have this issue.
The two samples handle the workspace size differently; maybe the other one is older sample code.
For now, I'm trying the other sample.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.