Is the INFO `[I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.` critical?

When running TensorRT, the following INFO message is displayed.

[I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.

We use a Tesla T4 GPU, so the available GPU memory is 15109 MiB ≈ 15,842,934,784 bytes.

It runs out of memory when max_workspace_size = 15870700000 is set, which exceeds the available bytes.

It works when max_workspace_size = 15000000000 is set, but the INFO above is still printed.
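As a quick sanity check on the arithmetic above (a minimal sketch; the values are the ones from this post):

```python
# Check the workspace-size arithmetic from the post.
MIB = 1024 * 1024  # 1 MiB in bytes

available_mib = 15109                 # as reported for the Tesla T4
available_bytes = available_mib * MIB

failing = 15_870_700_000              # this max_workspace_size ran out of memory
working = 15_000_000_000              # this one worked but still logged the INFO

print(available_bytes)                # 15842934784
print(failing > available_bytes)      # True: the failing value exceeds available memory
print(working < available_bytes)      # True: the working value fits
```

So the out-of-memory result is expected: the failing setting asks for more workspace than the GPU has.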

The same INFO message also appears in the TensorRT README.

https://github.com/NVIDIA/TensorRT#install-the-tensorrt-oss-components-optional

[08/23/2019-22:08:59] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.

The following NVIDIA blog post also says that you should allocate enough workspace memory.

“Set the Maximum Workspace Sizes” chapter

https://devblogs.nvidia.com/speed-up-inference-tensorrt/#h.97pc4btg9tu3

Question

Is this INFO a critical issue? Should it be resolved, or can it be left alone?

Our operating environment

  • GPU : Tesla T4
  • Host OS : Ubuntu 16.04.6 LTS
  • TensorRT : 6.0.1
  • NVIDIA Driver : 430.26
  • CUDA : 10.1
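For reference, this is roughly how we set the workspace via the TensorRT 6 Python API (a minimal sketch; the network-building and parsing steps are omitted, and the 1 GiB value is only illustrative):

```python
# Sketch: raising max_workspace_size with the TensorRT 6.x Python API.
# Guarded with try/except so the snippet is harmless without a GPU/TensorRT install.
MAX_WORKSPACE_SIZE = 1 << 30  # 1 GiB; increase if the "insufficient workspace" INFO persists

try:
    import tensorrt as trt

    # VERBOSE logging shows which tactics were skipped for lack of workspace.
    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    builder.max_workspace_size = MAX_WORKSPACE_SIZE  # TRT 6.x builder attribute
    # ... create the network, parse the model, then build_cuda_engine(network) ...
except ImportError:
    print("tensorrt is not installed; this is only a sketch")
```

Per the blog post above, the workspace should be as large as the model needs, but it must still fit alongside the model's own allocations, which is presumably why values close to total GPU memory fail.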

@inazuka.daiki Did you ever find a solution to your problem? I seem to be encountering this as well.

@solarflarefx Thanks for your comment. But unfortunately, I haven’t solved this problem yet.

I’ve also encountered this issue. I’m running in Docker and changed the ‘workspace_size’ in the model config file, but it doesn’t seem to work. Is there any solution for this issue?

Hi,
Please refer to the installation steps from the link below in case you are missing anything.

Also, we suggest you use TRT NGC containers to avoid any system-dependency-related issues.

Thanks!