Myelin memory budget exceeded while building TensorRT engine with batch > 1


Hi, I am trying to convert an ONNX model to a TensorRT engine. My build command is as follows:

trtexec --fp16 --explicitBatch \
    --workspace=2048 \
    --onnx="model.onnx" --saveEngine="model.trt" \
    --minShapes='input':1x1x64x16 \
    --optShapes='input':1x1x64x320 \
    --maxShapes='input':2x1x64x3200

I am facing the following error while building the engine:

[11/26/2020-16:27:24] [E] [TRT] ../builder/myelin/codeGenerator.cpp (338) - Myelin Error in compileGraph: 69 (myelinExceededMemBudget : Exceeded mem budget of 4294967296. Need 5338390656

I am unable to find any relevant info about this library (Myelin) to help me figure this out. Is there any way to increase the maximum memory limit here? The error only shows up when building the engine with batch size > 1 (I have set maxShapes='input':2x1x64x3200). With batch size 1 the engine builds fine.
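For scale, the two numbers in the error message work out as follows (just arithmetic, nothing TensorRT-specific):

```python
budget = 4294967296  # Myelin's memory budget from the error message
needed = 5338390656  # what the batch > 1 build actually requires

GiB = 2 ** 30
print(budget / GiB)            # the budget is exactly 4 GiB
print(round(needed / GiB, 2))  # the build needs roughly 4.97 GiB
```

So the batch > 1 build is about 1 GiB over the 4 GiB budget.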

The ONNX model that I want to convert is exported from PyTorch with the following configuration:

x = torch.ones((2, 1, 64, 640), dtype=torch.float)
torch.onnx.export(model, x, "model.onnx",
                  input_names=['input'],
                  dynamic_axes={'input': {0: 'batch', 3: 'width'}})  # batch and width can be dynamic

Additionally I have attached the model files and error log below.


TensorRT Version:
GPU Type: TITAN X (Pascal)
Nvidia Driver Version: 455
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

log.txt (431.7 KB) model.txt (1.7 KB) modules.txt (9.5 KB)

Any help would be appreciated. Thanks.

Hi @saifullah3396,
Can you try increasing the workspace via setMaxWorkspaceSize?
Also, if the issue persists, can you share the model in ONNX format?
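For reference, setMaxWorkspaceSize is the builder API call; when building with trtexec the equivalent is the --workspace flag, which takes a value in MiB. A sketch of the earlier command with a larger workspace (6144 MiB is just an example value, not a verified fix):

```shell
# --workspace is in MiB: 6144 MiB is roughly 6 GiB
trtexec --fp16 --explicitBatch \
    --workspace=6144 \
    --onnx="model.onnx" --saveEngine="model.trt" \
    --minShapes='input':1x1x64x16 \
    --optShapes='input':1x1x64x320 \
    --maxShapes='input':2x1x64x3200
```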


Hi @AakankshaS, thanks for your response. I have tried setting the max workspace to up to 6 GB but I still get the same error. One thing worth mentioning: I do not get these errors and can build the engine even with a batch size of 10, as long as the batch size is kept constant, like this:

trtexec --fp16 --explicitBatch \
    --workspace=2048 \
    --onnx="model.onnx" --saveEngine="model.trt" \
    --minShapes='input':10x1x64x16 \
    --optShapes='input':10x1x64x320 \
    --maxShapes='input':10x1x64x3200

Since my model has two dynamic dimensions, the batch (going from 1 to N) and the width (going from 16 to 3200), am I specifying them the wrong way? There is not much info about this in the TensorRT documentation, so I tried it this way thinking it would work. Also, does this mean it could work if I create a separate min/opt/max profile for each batch size, like this:

Profile 0:
--minShapes='input':1x1x64x16 \
--optShapes='input':1x1x64x320

Profile 1:
--minShapes='input':2x1x64x16 \
--optShapes='input':2x1x64x320

Profile N:
--minShapes='input':Nx1x64x16 \
--optShapes='input':Nx1x64x320
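The per-batch profiles sketched above can also be generated programmatically. A minimal sketch (the helper name and batch list are mine; each profile would still need a max shape, assumed here to use the width range 16 to 3200 mentioned earlier):

```python
def profile_shapes(batch):
    """Return min/opt/max (N, C, H, W) shapes for one optimization
    profile, keeping the batch fixed and letting only the width vary."""
    return {
        "min": (batch, 1, 64, 16),
        "opt": (batch, 1, 64, 320),
        "max": (batch, 1, 64, 3200),
    }

# one profile per batch size the engine should support
profiles = [profile_shapes(b) for b in (1, 2, 4, 8)]
print(profiles[1]["min"])  # (2, 1, 64, 16)
```

Each dict would then map onto one optimization profile's min/opt/max shapes when building the engine.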

Hi @saifullah3396,
Kindly refer to the below link for the same.