5.0 Plugin: getWorkspaceSize is unexpectedly called before initialize.

Provide details on the platforms you are using:
Linux distro and version: Ubuntu 16.04
GPU type: GTX 1080 (desktop)
nvidia driver version: 410.57
CUDA version: 9.0.176
CUDNN version: 7.3
Python version [if using python]: N/A
Tensorflow version: N/A
TensorRT version: 5.0.0.10 RC
If Jetson, OS, hw versions: N/A

Describe the problem

I am implementing a custom layer to replace the conv layers in the mnist sample. I need to set up cuDNN tensor descriptors in the initialize() function and set the workspace size after configureWithFormat is called.
Problem:
There is a high chance that getWorkspaceSize is called before initialize, which causes getWorkspaceSize to return a wrong number and leads to a wrong workspace allocation.

Files

[url]https://gist.github.com/traveller59/3e54e1c42c95c2635382454b4986f71e[/url] contains a plugin file.

Hello,

getWorkspaceSize is called by the builder and is used to compute the scratch size required by the plugin layer. In this case it looks like the workspace size depends on the handles set during the initialize() call.

In your case, you may want to try making getWorkspaceSize return 0. If it returns 0, the builder will use the workspace size set by builder->setMaxWorkspaceSize(), so as long as that is large enough the plugin should run.

Thanks for the reply.
I want to return mWorkspaceSize, which is set during initialize. Is that possible? The input/output sizes seem to be provided in configureWithFormat, and I need to calculate the workspace size after getting the input size.

If I return 0 in getWorkspaceSize, cudnnConvolutionForward returns CUDNN_STATUS_BAD_PARAM (the workspace pointer passed to enqueue is nullptr). The code was tested in samplePlugin.

Hello,

One way is for the plugin to create the tensor descriptors in or after configureWithFormat, and then query the cuDNN workspace size in getWorkspaceSize. It is not ideal, because you have to create a handle and then destroy it, but it should work.