How to set workspace in Tensorrt Python API when converting from onnx to engine model

I got a warning when converting from .onnx to an engine:

[06/13/2023-07:58:32] [TRT] [I] Some tactics do not have sufficient workspace memory to run. Increasing workspace size will enable more tactics, please check verbose output for requested sizes.

And this is my source code

    parser.add_argument("-w", "--workspace", default=1, type=int,
                        help="The max memory workspace size to allow in GiB, default: 1")

    self.builder = trt.Builder(self.trt_logger)
    self.config = self.builder.create_builder_config()
    self.config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace * (2 ** 30))
    # self.config.max_workspace_size = workspace * (2 ** 30)  # deprecated
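For what it's worth, the GiB-to-bytes conversion in the snippet above can be checked in isolation. This is a minimal sketch that mirrors the `-w/--workspace` flag from the question; the TensorRT calls themselves are omitted so it runs without `tensorrt` installed:

```python
import argparse

# Mirror the "-w/--workspace" flag from the snippet above (size in GiB).
parser = argparse.ArgumentParser()
parser.add_argument("-w", "--workspace", default=1, type=int,
                    help="The max memory workspace size to allow in GiB, default: 1")
args = parser.parse_args(["-w", "2"])

# 1 GiB = 2**30 bytes; this byte count is what gets passed to
# config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, ...).
workspace_bytes = args.workspace * (2 ** 30)
print(workspace_bytes)
```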

It seems that I have already set the maximum workspace.
Am I wrong somewhere in setting the workspace? Thanks.

Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the snippet below:

    import sys
    import onnx
    filename = yourONNXmodel
    model = onnx.load(filename)
    onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
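A sketch of such a trtexec run, for context (the file names are placeholders; `--memPoolSize` is the flag that replaces the older `--workspace` option in recent TensorRT releases, with sizes given in MiB by default):

```shell
# Build an engine from ONNX with a 2 GiB workspace pool and verbose logging.
# model.onnx and model.engine are placeholder paths.
trtexec --onnx=model.onnx --saveEngine=model.engine \
        --memPoolSize=workspace:2048 --verbose
```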

@AakankshaS Thanks, but please focus on my question: how to set the workspace in the TensorRT Python API. And I want to confirm whether my setting above is correct or not.

@spolisetty Please help me with your guide. Thank you so much.


Please refer to the following Python API document, which will help you.

Thank you.


Thanks for quick response.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.