How does the plugin registry work?

I have a Caffe prototxt that includes a leaky ReLU layer:

layer {
  name: "relu1"
  type: "LReLU"
  bottom: "ip1"
  top: "ip1"
}

So I used IPluginCreator to implement my custom leaky ReLU, then registered the creator with

REGISTER_TENSORRT_PLUGIN(LReluPluginCreator);

But I get this error while parsing the Caffe prototxt:

could not parse layer type LReLU

Then, to verify that the creator was actually registered, I listed all registered creators:

int n = 0;
auto creator_list = getPluginRegistry()->getPluginCreatorList(&n);
for (int i = 0; i < n; ++i) {
    std::cout << creator_list[i]->getPluginName() << std::endl;
    std::cout << creator_list[i]->getPluginVersion() << std::endl;
    std::cout << creator_list[i]->getPluginNamespace() << std::endl;
}
RnRes2Br2bBr2c_TRT
1

RnRes2Br1Br2c_TRT
1

CgPersistentLSTMPlugin_TRT
1

SingleStepLSTMPlugin
1

LRelu_TRT
1

So it seems LRelu_TRT is indeed registered.

So how can I solve the problem?

Hi,

To use TensorRT's registered plugins in your application, the libnvinfer_plugin.so library must be loaded and all plugins must be registered. This can be done by calling initLibNvInferPlugins(void* logger, const char* libNamespace) in your application code.

Could you please check whether the libnvinfer_plugin.so library is loaded in your application?

Also, if possible, could you please share the script and model file so we can help better?
Also, can you provide details on the platform you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o TensorFlow and PyTorch version
o TensorRT version

Thanks