Error loading custom model using imagenet-console from jetson-inference


I’m trying to load a custom model using imagenet-console from jetson-inference. (TensorRT version 2.1.2, tx1)

I got the following error:

[GIE] Internal error: could not find any implementation for node conv2_1/dw + relu2_1/dw, try increasing the workspace size with IBuilder::setMaxWorkSpace()
[GIE] cudnnBuilder2.cpp (586) - OutOfMemory Error in buildSingleLayer

I tried increasing the workspace with builder->setMaxWorkspaceSize(16 << 24), but that didn’t solve the problem. Help would be greatly appreciated.


Here is a relevant topic:

Could you take a look and check whether it also fixes your issue?

Thanks and please let us know the result.

Unfortunately, it did not fix the issue.

However, I have found that it works if I disable FP16. Why would that be the case?


Could you check whether this issue also occurs on TensorRT 3.0?

That fixed the issue, thanks.