Description
I have a model that I want to optimize using trtexec. With batch size 2 the engine builds normally, but with batch size 16 the build fails with an out-of-memory error.
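For reference, the command I run looks roughly like this (the model path, input name, and shapes below are placeholders, not the exact values from my setup):

    trtexec --onnx=model.onnx \
            --shapes=input:16x3x224x224 \
            --saveEngine=model_bs16.plan

With batch size 16 the build fails and the log ends with: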
Try decreasing the workspace size with IBuilderConfig::setMemoryPoolLimit().
[02/15/2023-06:20:02] [E] Error[2]: [virtualMemoryBuffer.cpp::resizePhysical::145] Error Code 2: OutOfMemory (no further information)
[02/15/2023-06:20:02] [E] Error[2]: [virtualMemoryBuffer.cpp::resizePhysical::145] Error Code 2: OutOfMemory (no further information)
[02/15/2023-06:20:02] [W] [TRT] Requested amount of GPU memory (2147483648 bytes) could not be allocated. There may not be enough free memory for allocation to succeed.
[02/15/2023-06:20:02] [W] [TRT] Skipping tactic 2 due to insufficient memory on requested size of 2147483648 detected for tactic 0x0000000000000000.
Try decreasing the workspace size with IBuilderConfig::setMemoryPoolLimit().
[02/15/2023-06:20:08] [E] Error[1]: [convolutionRunner.cpp::executeConv::465] Error Code 1: Cudnn (CUDNN_STATUS_ALLOC_FAILED)
[02/15/2023-06:20:08] [E] Error[2]: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[02/15/2023-06:20:08] [E] Engine could not be created from network
[02/15/2023-06:20:08] [E] Building engine failed
[02/15/2023-06:20:08] [E] Failed to create engine from model or file.
[02/15/2023-06:20:08] [E] Engine set up failed
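If I understand the message correctly, the trtexec counterpart of the IBuilderConfig::setMemoryPoolLimit() suggestion is the --memPoolSize option, so I assume I could cap the workspace pool with something like this (the 2048 MiB value is only an example):

    trtexec --onnx=model.onnx \
            --shapes=input:16x3x224x224 \
            --memPoolSize=workspace:2048 \
            --saveEngine=model_bs16.plan

I am not sure, though, whether lowering the workspace alone is enough at batch size 16.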
Apart from switching to a GPU with more memory, what can I do to deal with this?
Environment
NVIDIA Docker container 22.12