How to read layer parameters from .prototxt with a TensorRT plugin

If a layer in Caffe is not implemented by TensorRT, we need to implement it ourselves with a TensorRT plugin.

template <typename Dtype>
class Layer {
...
 protected:
  /** The protobuf that stores the layer parameters */
  LayerParameter layer_param_;
...
};

I think a Caffe layer can read parameters from the .prototxt through its layer_param_ member.
1. How does a TensorRT plugin read layer parameters?
The Caffe Layer class has:

template <typename Dtype>
class Layer {
...
  /**
   * This method should do one-time layer specific setup. This includes reading
   * and processing relevant parameters from the <code>layer_param_</code>.
   * Setting up the shapes of top blobs and internal buffers should be done in
   * <code>Reshape</code>, which will be called before the forward pass to
   * adjust the top blob sizes.
   */
  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {}
...
};
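
For example, Caffe's PowerLayer reads its parameters in LayerSetUp (a sketch condensed from Caffe's power_layer.cpp; the member names follow the Caffe source):

template <typename Dtype>
void PowerLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  NeuronLayer<Dtype>::LayerSetUp(bottom, top);
  // power_param comes straight from this layer's entry in the .prototxt.
  power_ = this->layer_param_.power_param().power();
  scale_ = this->layer_param_.power_param().scale();
  shift_ = this->layer_param_.power_param().shift();
}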

so each layer can initialize its contents and set up its top blob dimensions. But:
2. Why does the TensorRT plugin sample use a class template parameter to declare each plugin in the factory?

template <int OutC>
class Reshape : public IPlugin
{
public:
    Reshape() {}
    Reshape(const void* buffer, size_t size)
    {
        assert(size == sizeof(mCopySize));
        mCopySize = *reinterpret_cast<const size_t*>(buffer);
    }
protected:
    size_t mCopySize;
};
// integration for serialization
class PluginFactory : public nvinfer1::IPluginFactory, public nvcaffeparser1::IPluginFactory
{
public:
    // caffe parser plugin implementation
    virtual nvinfer1::IPlugin* createPlugin(const char* layerName, const nvinfer1::Weights* weights, int nbWeights) override;
    // deserialization plugin implementation
    IPlugin* createPlugin(const char* layerName, const void* serialData, size_t serialLength) override;

    std::unique_ptr<Reshape<2>> mPluginRshp2{ nullptr };
    std::unique_ptr<Reshape<18>> mPluginRshp18{ nullptr };
};

Since the same PluginFactory class is used by both nvcaffeparser and nvinfer, there are corresponding createPlugin() overloads for constructing the Reshape plugin.
3. What is the difference between weights in the nvcaffeparser phase and serialData in the nvinfer phase?

There is a description of the PLAN file in section 2.2, Workflow Diagrams, of the TensorRT User Guide.
4. Has anybody used the PLAN file? Is there a way to use it?

Hi,

1. The plugin API doesn't support parameter parsing. Please read the parameters directly in the plugin's constructor.
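
Since the parser does not hand the plugin its LayerParameter, one workaround is to parse the deploy .prototxt yourself in the plugin's constructor. A sketch, assuming you link against the Caffe protobuf (caffe.pb.h generated from caffe.proto); readLayerParam() is a hypothetical helper, not part of the TensorRT API:

#include <fstream>
#include <iterator>
#include <string>
#include <google/protobuf/text_format.h>
#include "caffe.pb.h"  // generated from caffe.proto

// Hypothetical helper: find a layer's parameters in the deploy .prototxt.
// Assumes the file parses as a caffe::NetParameter text proto.
caffe::LayerParameter readLayerParam(const std::string& prototxtPath,
                                     const std::string& layerName)
{
    std::ifstream in(prototxtPath);
    std::string text((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());
    caffe::NetParameter net;
    google::protobuf::TextFormat::ParseFromString(text, &net);
    for (int i = 0; i < net.layer_size(); ++i)
        if (net.layer(i).name() == layerName)
            return net.layer(i);
    return caffe::LayerParameter();  // not found: return empty parameters
}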

2. In case you need to support different input types.
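
Concretely, the template argument bakes the output channel count into the type, so Reshape<2> and Reshape<18> are two distinct plugin classes that report different output shapes without any runtime parameter. A sketch of the corresponding method inside Reshape, adapted from the Faster R-CNN sample (asserts abbreviated):

    // OutC is fixed at compile time by the template argument, so no
    // parameter needs to be parsed or serialized for the output shape.
    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        assert(nbInputDims == 1 && index == 0 && inputs[0].nbDims == 3);
        assert((inputs[0].d[0] * inputs[0].d[1]) % OutC == 0);
        // Fold C*H into OutC channels; the total volume stays the same.
        return DimsCHW(OutC, inputs[0].d[0] * inputs[0].d[1] / OutC, inputs[0].d[2]);
    }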

3. weights is used when building the engine for inference, while serialData is used for deserializing it.
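
In other words, the factory sees each plugin twice. A sketch of the two paths for one plugin instance (the layer name "ReshapeCTo2" is illustrative; match it against your own .prototxt):

// Parse phase (nvcaffeparser1): called while reading the Caffe model.
// Reshape has no trained weights, so the Weights array is unused.
nvinfer1::IPlugin* PluginFactory::createPlugin(const char* layerName,
    const nvinfer1::Weights* /*weights*/, int /*nbWeights*/)
{
    assert(!strcmp(layerName, "ReshapeCTo2"));
    mPluginRshp2 = std::unique_ptr<Reshape<2>>(new Reshape<2>());
    return mPluginRshp2.get();
}

// Deserialize phase (nvinfer1): serialData is whatever this plugin
// wrote in its serialize() call when the engine was built.
nvinfer1::IPlugin* PluginFactory::createPlugin(const char* layerName,
    const void* serialData, size_t serialLength)
{
    assert(!strcmp(layerName, "ReshapeCTo2"));
    mPluginRshp2 = std::unique_ptr<Reshape<2>>(
        new Reshape<2>(serialData, serialLength));
    return mPluginRshp2.get();
}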

4. PLAN means a serialized engine.
It's recommended to re-create the TensorRT engine from the PLAN file to save the initial build time.
Check this sample for more information:
https://github.com/dusty-nv/jetson-inference/blob/master/tensorNet.cpp#L252
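
Saving and reloading a PLAN file might look like this (a sketch against the TensorRT 2/3 C++ API; savePlan/loadPlan are hypothetical helper names):

#include <fstream>
#include <iterator>
#include <string>
#include "NvInfer.h"

// Serialize: after buildCudaEngine(), dump the engine to a PLAN file.
void savePlan(nvinfer1::ICudaEngine* engine, const char* path)
{
    nvinfer1::IHostMemory* plan = engine->serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());
    plan->destroy();
}

// Deserialize: re-create the engine from the PLAN file, skipping the
// expensive build step; the plugin factory recreates custom layers.
nvinfer1::ICudaEngine* loadPlan(const char* path,
    nvinfer1::ILogger& logger, nvinfer1::IPluginFactory* factory)
{
    std::ifstream in(path, std::ios::binary);
    std::string blob((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), factory);
}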

Thanks.

It seems we can load the PLAN file into gieModelStream.
Thanks.