TX2 fp16 Error : OutOfMemory error in buildsingleLayer

My model runs fine in FP32 format. However, when I switch to FP16, I get the error "OutOfMemory error in buildsingleLayer, ..., try increasing the workspace size with IBuilder::setMaxWorkspaceSize()".
I set the max workspace size with builder->setMaxWorkspaceSize(16<<20), and my batch size is 1.


The most common cause of this error is an incomplete FP16 declaration.

To turn on FP16 mode, you need to enable it on both the parser and the builder:
Parser : https://github.com/dusty-nv/jetson-inference/blob/master/tensorNet.cpp#L127
Builder: https://github.com/dusty-nv/jetson-inference/blob/master/tensorNet.cpp#L163

Please make sure the configuration is correct first.
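For reference, the two settings together look roughly like this. This is only a sketch of the call pattern, assuming the TensorRT 2.x/3.x Caffe-parser API that jetson-inference uses; the surrounding setup and error handling are omitted:

```cpp
// Sketch only: assumes nvinfer1 / nvcaffeparser1 from TensorRT 2.x/3.x.
// 1. Ask the parser for FP16 weights when parsing the Caffe model.
const IBlobNameToTensor* blobNameToTensor =
    parser->parse(deployFile.c_str(), modelFile.c_str(),
                  *network, DataType::kHALF);

// 2. Ask the builder to build the engine in FP16 (half2) mode as well.
builder->setHalf2Mode(true);
builder->setMaxBatchSize(1);
builder->setMaxWorkspaceSize(16 << 20);
```

If only the parser side is set, the builder still plans in FP32 and the mismatch can surface as the build-time error above.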

I made sure the configuration is correct. My code is:

const IBlobNameToTensor* blobNameToTensor = parser->parse(locateFile(deployFile).c_str(), locateFile(modelFile).c_str(), *network,DataType::kHALF);

Is there any other possible cause of this error? Thank you very much!


To give a further suggestion, could you share your implementation with us?

@AastaLLL I have already sent the code to you via private message.
Thank you very much.


The plugin API is float mode only. We are checking the possibility of FP16 support for plugins, but there is no concrete schedule yet.

Could you try these tests to narrow down the error?
1. Run your model without the plugin API.
2. Run your complete model in float mode.