how weights are passed to plugin layer -- SampleFasterRCNN

Hi

In the SampleFasterRCNN example provided with TensorRT 3, the plugins created for unsupported layers do not seem to receive their weights from the .caffemodel file.

My questions are:

  1. How do I pass weights to these plugins?
  2. How do I pass weights to the RPROIFused layer specifically, using the createFasterRCNNPlugin function?

Hi,

We have another sample that demonstrates how weight values are handled.
Please check ‘/usr/src/tensorrt/samples/samplePlugin/’.

1) Please check the constructor of the FCPlugin class in samplePlugin:

FCPlugin(const Weights *weights, int nbWeights, int nbOutputChannels): mNbOutputChannels(nbOutputChannels)
...
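To make the pattern concrete, here is a minimal, self-contained sketch of a constructor that deep-copies the parser-supplied weights. The Weights struct below is a reduced stand-in for nvinfer1::Weights so the snippet compiles on its own, and MyPlugin is a hypothetical name (the real FCPlugin additionally converts the data and uploads it to the GPU):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Reduced stand-in for nvinfer1::Weights; the real struct also
// carries a DataType field describing the element type.
struct Weights {
    const void* values;
    long long count;
};

// Hypothetical plugin that deep-copies the weights handed to it by the
// caffe parser -- the same pattern FCPlugin uses in samplePlugin.
class MyPlugin {
public:
    MyPlugin(const Weights* weights, int nbWeights) {
        // The parser owns `weights` only for the duration of the
        // createPlugin call, so copy the data into plugin-owned memory.
        for (int i = 0; i < nbWeights; ++i) {
            std::vector<float> copy(weights[i].count);
            std::memcpy(copy.data(), weights[i].values,
                        weights[i].count * sizeof(float));
            mWeights.push_back(std::move(copy));
        }
    }
    const std::vector<float>& weightsAt(int i) const { return mWeights[i]; }
private:
    std::vector<std::vector<float>> mWeights;
};
```

Because the plugin keeps its own copy, the buffer the parser passed in can be freed or mutated afterwards without affecting the plugin.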

2) Please check our plugin document here:

INvPlugin * nvinfer1::plugin::createFasterRCNNPlugin (int featureStride, int preNmsTop, int nmsMaxOut, float iouThreshold, float minBoxSize, float spatialScale, DimsHW pooling, Weights anchorRatios, Weights anchorScales)
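As a sketch, the two Weights arguments simply wrap plain float arrays holding the anchor ratios and scales. The struct below is a stand-in for nvinfer1::Weights so the snippet stands alone; the commented-out call shows roughly how sampleFasterRCNN invokes the function (treat the parameter values as an example, not a prescription):

```cpp
#include <cassert>

// Stand-in for nvinfer1::Weights so this sketch is self-contained;
// real code includes NvInferPlugin.h and uses nvinfer1::plugin::createFasterRCNNPlugin.
struct Weights { const void* values; long long count; };

// The standard Faster R-CNN anchor configuration (3 ratios x 3 scales).
static const float kRatios[] = {0.5f, 1.0f, 2.0f};
static const float kScales[] = {8.0f, 16.0f, 32.0f};

inline Weights makeAnchorRatios() { return Weights{kRatios, 3}; }
inline Weights makeAnchorScales() { return Weights{kScales, 3}; }

// With the real API the call would then look something like this
// (parameter values as used in sampleFasterRCNN for VGG16):
//
// INvPlugin* plugin = nvinfer1::plugin::createFasterRCNNPlugin(
//     16,        /* featureStride */
//     6000,      /* preNmsTop     */
//     300,       /* nmsMaxOut     */
//     0.7f,      /* iouThreshold  */
//     16.0f,     /* minBoxSize    */
//     0.0625f,   /* spatialScale = 1/featureStride */
//     DimsHW(7, 7),
//     makeAnchorRatios(), makeAnchorScales());
```

The key point is that anchorRatios and anchorScales are not read from the .caffemodel at all; you supply them yourself as Weights wrapping host-side float arrays.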

Thanks.

thanks @AastaLLL

Hi, I want to implement depthwise separable convolution in TensorRT 2.1. I just followed samplePlugin, but the nbWeights parameter is 0 and no weights are passed into the createPlugin function.
My code is as follows:

nvinfer1::IPlugin* PluginFactory::createPlugin(const char* layerName, const nvinfer1::Weights* weights, int nbWeights){
    assert(isPlugin(layerName));
    LayerType layerType = getLayerType(layerName);
    switch (layerType) {
        case DEPTHWISELAYER_S1:
            cout << "layer : " << layerName << " weight num " << nbWeights << endl;
            assert(std::find_if(depLayers.begin(), depLayers.end(), [&](const DepthwiseLayer& r) {return r.first == layerName;}) == depLayers.end());
            dLayer = std::unique_ptr<DepthwiseConvLayer>(new DepthwiseConvLayer(weights, nbWeights, 1));
            depLayers.push_back(std::make_pair(layerName, std::move(dLayer)));
            // dLayer is empty after std::move; return the pointer now owned by the container
            return depLayers.back().second.get();
        case DEPTHWISELAYER_S2:
            cout << "layer : " << layerName << " weight num " << nbWeights;
            if (nbWeights > 0)  // only dereference weights when the parser actually passed some
                cout << " count :" << weights[0].count;
            cout << endl;
            assert(std::find_if(depLayers.begin(), depLayers.end(), [&](const DepthwiseLayer& r) {return r.first == layerName;}) == depLayers.end());
            dLayer = std::unique_ptr<DepthwiseConvLayer>(new DepthwiseConvLayer(weights, nbWeights, 2));
            depLayers.push_back(std::make_pair(layerName, std::move(dLayer)));
            return depLayers.back().second.get();
        default:
            printf("Not supported layer\n");
            assert(0);
            return nullptr;
    }
}
bool PluginFactory::isPlugin(const char *name){
    return (!strcmp(name, "preprocess"))
        || (!strcmp(name, "postprocess"))
        || (!strcmp(name, "reorg"))
        || (strstr(name, "depthwise_s1"))
        || (strstr(name, "depthwise_s2"))
        || (!strcmp(name, "region_output"));
}
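Note how the matching in isPlugin works: strcmp requires an exact name, while strstr only requires a substring. A self-contained copy of the same logic (the name isPluginName is mine, for illustration) makes this easy to verify. It also hints at the root cause found later in this thread: the caffe parser looks up a layer's weights in the .caffemodel by layer name, so a layer renamed only in the prototxt can still match here yet reach createPlugin with nbWeights == 0.

```cpp
#include <cassert>
#include <cstring>

// Self-contained copy of the isPlugin logic above, with a hypothetical name.
// strstr() matches any name that *contains* "depthwise_s1"/"depthwise_s2";
// strcmp() requires an exact match.
bool isPluginName(const char* name) {
    return (!strcmp(name, "preprocess"))
        || (!strcmp(name, "postprocess"))
        || (!strcmp(name, "reorg"))
        || (strstr(name, "depthwise_s1") != nullptr)
        || (strstr(name, "depthwise_s2") != nullptr)
        || (!strcmp(name, "region_output"));
}
```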
IPlugin* PluginFactory::createPlugin(const char* layerName, const void *serialData, size_t serialLength) {
    assert(isPlugin(layerName));
    LayerType layerType = getLayerType(layerName);
    switch (layerType) {
        case PREPROCESSLAYER:
            assert(preprocessLayer.get() == nullptr);
            preprocessLayer = std::unique_ptr<PreprocessLayer>(new PreprocessLayer());
            return preprocessLayer.get();
        case POSTPROCESSLAYER:
            assert(postProcessLayer.get() == nullptr);
            postProcessLayer = std::unique_ptr<PostProcessLayer>(new PostProcessLayer());
            return postProcessLayer.get();
        case REORGLAYER:
            assert(reorgLayer.get() == nullptr);
            reorgLayer = std::unique_ptr<ReorgLayer>(new ReorgLayer());
            return reorgLayer.get();
        case DEPTHWISELAYER_S1:
        case DEPTHWISELAYER_S2:
            assert(std::find_if(depLayers.begin(), depLayers.end(), [&](const DepthwiseLayer& r) {return r.first == layerName;}) == depLayers.end());
            dLayer = std::unique_ptr<DepthwiseConvLayer>(new DepthwiseConvLayer(serialData, serialLength));
            depLayers.push_back(std::make_pair(layerName, std::move(dLayer)));
            // dLayer has been moved from; return the pointer stored in the container
            return depLayers.back().second.get();
        case OUTPUTLAYER:
            assert(outputLayer.get() == nullptr);
            outputLayer = std::unique_ptr<OutputLayer>(new OutputLayer());
            return outputLayer.get();
        default:
            printf("Not supported layer\n");
            assert(0);
            return nullptr;
    }
}
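This second overload receives a raw byte buffer that the plugin itself wrote earlier in IPlugin::serialize(). A self-contained sketch of that round trip (the class name and buffer layout below are hypothetical, not the actual DepthwiseConvLayer):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Hypothetical sketch of the serialize/deserialize round trip behind the
// (serialData, serialLength) factory overload: the plugin writes its weight
// count followed by the raw values, and the deserializing constructor reads
// them back in the same order.
class DepthwiseWeightsSketch {
public:
    explicit DepthwiseWeightsSketch(std::vector<float> w) : mWeights(std::move(w)) {}

    // Mirrors IPlugin::serialize(void* buffer): write count, then values.
    std::vector<char> serialize() const {
        size_t count = mWeights.size();
        std::vector<char> buf(sizeof(count) + count * sizeof(float));
        char* p = buf.data();
        std::memcpy(p, &count, sizeof(count));
        p += sizeof(count);
        std::memcpy(p, mWeights.data(), count * sizeof(float));
        return buf;
    }

    // Mirrors the (const void* serialData, size_t serialLength) constructor.
    DepthwiseWeightsSketch(const void* serialData, size_t serialLength) {
        const char* p = static_cast<const char*>(serialData);
        size_t count = 0;
        std::memcpy(&count, p, sizeof(count));
        p += sizeof(count);
        assert(serialLength == sizeof(count) + count * sizeof(float));
        mWeights.resize(count);
        std::memcpy(mWeights.data(), p, count * sizeof(float));
    }

    const std::vector<float>& weights() const { return mWeights; }
private:
    std::vector<float> mWeights;
};
```

Whatever layout you choose, the only requirement is that serialize() and the deserializing constructor agree on it exactly.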

The layer configuration in the prototxt is:

layer {
  name: "conv2_1/depthwise_s1"
  type: "Convolution"
  bottom: "conv1"
  top: "conv2_1/dw"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  convolution_param {
    num_output: 32
    #num_output: 1
    bias_term: false
    pad: 1
    kernel_size: 3
    group: 32
    #engine: CAFFE
    stride: 1
    weight_filler {
      type: "msra"
    }
  }
}

Hi, 1325956970

It’s recommended to create a new topic for your problem.
This will help other users find similar information.

Thanks.

Solved it.
Sorry, it was because I edited the layer’s name.

Good.

Thanks for the update.