Error loading custom model using imagenet-console from jetson-inference

Hi,

I’m trying to load a custom model using imagenet-console from jetson-inference (TensorRT 2.1.2 on a Jetson TX1).

I got the following error:

[GIE] Internal error: could not find any implementation for node conv2_1/dw + relu2_1/dw, try increasing the workspace size with IBuilder::setMaxWorkSpace()
[GIE] cudnnBuilder2.cpp (586) - OutOfMemory Error in buildSingleLayer

I tried increasing the workspace size with builder->setMaxWorkspaceSize(16 << 24), but that didn’t solve the problem. Any help would be greatly appreciated.
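
For context, setMaxWorkspaceSize() has to be called on the builder before buildCudaEngine(); jetson-inference does the equivalent inside its tensorNet code rather than in imagenet-console itself. Below is a minimal sketch of the plain TensorRT 2.x Caffe build path, assuming a user-defined logger; the file paths, the output blob name, and the 256 MB value (16 << 24 bytes) are placeholders, not jetson-inference's exact code.

#include <NvInfer.h>
#include <NvCaffeParser.h>

// Minimal sketch: build a TensorRT 2.x engine from a Caffe model.
nvinfer1::ICudaEngine* buildEngine(nvinfer1::ILogger& logger)
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    // Parse the Caffe deploy/weights files (placeholder paths).
    nvcaffeparser1::ICaffeParser* parser = nvcaffeparser1::createCaffeParser();
    const nvcaffeparser1::IBlobNameToTensor* blobs =
        parser->parse("deploy.prototxt", "model.caffemodel", *network,
                      nvinfer1::DataType::kFLOAT);

    // Mark the network output (placeholder blob name).
    network->markOutput(*blobs->find("prob"));

    builder->setMaxBatchSize(1);
    // 16 << 24 bytes = 256 MB of scratch space for layer implementations.
    builder->setMaxWorkspaceSize(16 << 24);

    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);

    network->destroy();
    parser->destroy();
    builder->destroy();
    return engine;
}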

Hi,

Here is a relevant topic:
[url]https://devtalk.nvidia.com/default/topic/1026847/jetson-tx2/tensorrt-3-0-deconvolution-layer-not-working-in-tx2/[/url]

Could you take a look and check whether it also fixes your issue?

Thanks and please let us know the result.

Unfortunately, it did not fix the issue.

However, I have found that it works if I disable FP16. Why would that be the case?
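
For context, in this TensorRT generation FP16 is opted into on the builder before the engine is built, so "disabling FP16" just means leaving half2 mode off. A minimal sketch of the calls involved; enableFP16 is a hypothetical local flag here, not a jetson-inference symbol:

// Sketch only: decide whether to build in FP16 before buildCudaEngine().
bool enableFP16 = builder->platformHasFastFp16();   // the TX1 reports true
enableFP16 = false;                                 // workaround: force an FP32 build
builder->setHalf2Mode(enableFP16);                  // TensorRT 2.x/3.x FP16 ("half2") mode

nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);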

Hi,

Could you check whether this issue also occurs on TensorRT 3.0?
Thanks.

Upgrading to TensorRT 3.0 fixed the issue, thanks.