How to use addPlugin() in TensorRT 2.1?

I am having trouble using INetwork::addPlugin(). I want to add a Reshape plugin when building Faster R-CNN through the API rather than the Caffe parser:

template<int OutC>
class Reshape : public IPlugin
{
public:
    Reshape() {}
    Reshape(const void* buffer, size_t size)
    {
        assert(size == sizeof(mCopySize));
        mCopySize = *reinterpret_cast<const size_t*>(buffer);
    }

    int getNbOutputs() const override
    {
        return 1;
    }
    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        assert(nbInputDims == 1);
        assert(index == 0);
        assert(inputs[index].nbDims == 3);
        assert((inputs[0].d[0])*(inputs[0].d[1]) % OutC == 0);
        return DimsCHW(OutC, inputs[0].d[0] * inputs[0].d[1] / OutC, inputs[0].d[2]);
    }

    int initialize() override
    {
        return 0;
    }

    void terminate() override
    {
    }

    size_t getWorkspaceSize(int) const override
    {
        return 0;
    }

    // currently it is not possible for a plugin to execute "in place". Therefore we memcpy the data from the input to the output buffer
    int enqueue(int batchSize, const void*const *inputs, void** outputs, void*, cudaStream_t stream) override
    {
        CHECK(cudaMemcpyAsync(outputs[0], inputs[0], mCopySize * batchSize, cudaMemcpyDeviceToDevice, stream));
        return 0;
    }

    size_t getSerializationSize() override
    {
        return sizeof(mCopySize);
    }

    void serialize(void* buffer) override
    {
        *reinterpret_cast<size_t*>(buffer) = mCopySize;
    }

    void configure(const Dims* inputs, int nbInputs, const Dims* outputs, int nbOutputs, int) override
    {
        // Record the per-image copy size (in bytes) at build time for use in enqueue().
        mCopySize = inputs[0].d[0] * inputs[0].d[1] * inputs[0].d[2] * sizeof(float);
        std::cout << mCopySize << std::endl;
    }

protected:
    size_t mCopySize;
};

ICudaEngine *
createEngine(unsigned int maxBatchSize, IBuilder *builder, DataType dt)
{
    INetworkDefinition* network = builder->createNetwork();

    std::map<std::string, Weights> weightMap = loadWeights(locateFile("rcnn.wts"));

    //  Create input of shape { 1, 3, 288, 512 } with name referenced by INPUT_BLOB_NAME
    auto data = network->addInput(INPUT_BLOB_NAME, dt, DimsCHW{ INPUT_C, INPUT_H, INPUT_W});
    assert(data != nullptr);

    // group 1
    auto conv1_1 = network->addConvolution(*data, 32, DimsHW{3, 3}, weightMap["conv1_1_weight"], weightMap["conv1_1_bias"]);
    assert(conv1_1 != nullptr);
    conv1_1->setPadding(DimsHW{1, 1});
    conv1_1->setStride(DimsHW{1, 1});

    // and more layers

    // ROI Proposal
    Reshape<2> PluginRshp2;
    auto rpn_cls_score_reshape = network->addPlugin(reinterpret_cast<ITensor* const*>(rpn_cls_score->getOutput(0)), 1, *reinterpret_cast<IPlugin*>(&PluginRshp2));
    assert(rpn_cls_score_reshape != nullptr);
}

It generates a segfault. Could anyone help?

The user guide says there are three ways to add the plugin into a network:

  1. Use the INetwork::addPlugin() method when defining the network.
  2. Create the network via a parser.
  3. De-serialize the network after it has been built.

Both samplePlugin and sampleFasterRCNN add the plugin via the Caffe parser. It would be great to have an example of adding a plugin via the API, i.e. addPlugin(). Thanks!
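(For later readers: below is a minimal sketch of how the addPlugin() call is presumably meant to look, assuming the TensorRT 2.1 nvinfer1 API. The first argument is an array of ITensor pointers rather than a single tensor cast to ITensor* const*, and the plugin object must stay alive at least until the engine has been built and serialized. The layer name set here is only an example.)

// Sketch only, not from an official sample. Reshape<2> is the class defined above.
Reshape<2> pluginRshp2;

// addPlugin() takes an array of input tensor pointers and its length,
// not a single ITensor* reinterpret_cast to ITensor* const*.
ITensor* reshapeInputs[] = { rpn_cls_score->getOutput(0) };
auto rpn_cls_score_reshape = network->addPlugin(reshapeInputs, 1, pluginRshp2);
assert(rpn_cls_score_reshape != nullptr);
rpn_cls_score_reshape->setName("rpn_cls_score_reshape");   // name is an example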

Hi,

Here is another plugin sample, although it doesn’t use the addPlugin() API.

Usually, there are two possible flows:

  1. Caffe model → TensorRT model → inference
    Face-Recognition/tensorNet.cpp at master · AastaNV/Face-Recognition · GitHub

  2. TensorRT model → inference
    Face-Recognition/tensorNet.cpp at master · AastaNV/Face-Recognition · GitHub

addPlugin() is for users who want to define layers via the TensorRT API directly.
From your source code, do you allocate the input buffer somewhere?

Or could you paste the complete source so we can debug it?

Thanks!

Since I am converting a model from MXNet to TensorRT, the addPlugin() API is preferred.

The documentation says the different methods of the plugin class are called during network creation, building, runtime, and serialization respectively. I am confused about how the input buffer is passed into the plugin.
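My current understanding of the runtime side is roughly the sketch below (my own code, not from a sample; OUTPUT_BLOB_NAME and OUTPUT_SIZE are placeholder names, while INPUT_BLOB_NAME and INPUT_C/H/W come from the code above). The application only allocates the engine's input/output bindings; the device pointers the plugin sees in enqueue() are supplied by TensorRT itself, in the order the tensors were passed to addPlugin():

void doInference(nvinfer1::IExecutionContext& context, float* inputHost, float* outputHost, int batchSize)
{
    const nvinfer1::ICudaEngine& engine = context.getEngine();

    // Bindings are looked up by blob name; only the network's inputs/outputs
    // need buffers here, never the plugin's intermediate tensors.
    int inputIndex  = engine.getBindingIndex(INPUT_BLOB_NAME);
    int outputIndex = engine.getBindingIndex(OUTPUT_BLOB_NAME);   // placeholder name

    void* buffers[2];
    CHECK(cudaMalloc(&buffers[inputIndex],  batchSize * INPUT_C * INPUT_H * INPUT_W * sizeof(float)));
    CHECK(cudaMalloc(&buffers[outputIndex], batchSize * OUTPUT_SIZE * sizeof(float)));   // placeholder size

    cudaStream_t stream;
    CHECK(cudaStreamCreate(&stream));

    CHECK(cudaMemcpyAsync(buffers[inputIndex], inputHost,
                          batchSize * INPUT_C * INPUT_H * INPUT_W * sizeof(float),
                          cudaMemcpyHostToDevice, stream));
    context.enqueue(batchSize, buffers, stream, nullptr);   // the plugin's enqueue() is called inside here
    CHECK(cudaMemcpyAsync(outputHost, buffers[outputIndex],
                          batchSize * OUTPUT_SIZE * sizeof(float),
                          cudaMemcpyDeviceToHost, stream));
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    CHECK(cudaFree(buffers[inputIndex]));
    CHECK(cudaFree(buffers[outputIndex]));
}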

Anyway, a complete example of this would be really helpful!

Hi,

Please check this sample:
/usr/src/tensorrt/samples/sampleCharRNN

Thanks.

I tried sampleCharRNN. It generates this error:

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffcea4e700 (LWP 24445)]
Original: 4 layers
After dead-layer removal: 4 layers
After scale fusion: 4 layers
After conv-act fusion: 4 layers
After tensor merging: 4 layers
After concat removal: 4 layers
Region RNN output: NCHW_F32
Region (Unnamed ITensor* 6): NCHW_F32
Region FC output: NCHW_F32
Region data: NCHW_F32
Region hiddenIn: NCHW_F32
Region cellIn: NCHW_F32
Region RNN output: NCHW_F32
Region hiddenOut: NCHW_F32
Region cellOut: NCHW_F32
Region (Unnamed ITensor* 6): NCHW_F32
Region FC output: NCHW_F32
Region prob: NCHW_F32

Node (Unnamed Layer* 0): NCHW_F32
Node reshape: NCHW_F32
Node (Unnamed Layer* 2): NCHW_F32
Node (Unnamed Layer* 3): NCHW_F32

After reformat layers: 4 layers
Block size 33554432
Block size 2048
Block size 2048
Total Activation Memory: 33558528
[New Thread 0x7fffcd24c700 (LWP 24446)]
[New Thread 0x7fffcca4b700 (LWP 24447)]

--------------- Timing (Unnamed Layer* 0)(13)
cudnnRNNLayer.cpp (114) - Cuda Error in allocateResources: 3
sample_char_rnn_debug: sampleCharRNN.cpp:367: void APIToModel(std::map<std::__cxx11::basic_string<char>, nvinfer1::Weights>&, nvinfer1::IHostMemory**): Assertion `engine != nullptr' failed.

Thread 1 "sample_char_rnn" received signal SIGABRT, Aborted.
0x00007fffe4083428 in __GI_raise (sig=sig@entry=6)
    at ../sysdeps/unix/sysv/linux/raise.c:54
54	../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb)

The system is Ubuntu 16.04 with CUDA 8.0.

Could you help have a look? Thanks!

Hi,

We can’t reproduce this issue.
Are you using a TX2 with JetPack 3.1?

No, I am using a GTX 1080.

Hi,

This is the Jetson forum, so we want to confirm a couple of things first:

  1. Did you download your TensorRT package from this page?
    TensorRT SDK | NVIDIA Developer

  2. Try to run it with sudo.

Thanks.

Hi

I ran into the same problem when I used the addPlugin() function. Did you solve it?

Thanks!

Hi,

Please install TensorRT 3.0.

In TensorRT 3.0, the sampleCharRNN example demonstrates the addPlugin() API.
Thanks.

I saved my PLAN to a file and then de-serialized it in my program, but the line “nvinfer1::ICudaEngine* engine = runtime->deserializeCudaEngine(modelMen, modelsize, &pluginFactory);” failed!

I guess the custom layer’s parameters are not saved in the PLAN file. Can you give me some advice?

Is there a demo that builds a PLAN file containing a custom layer and then deserializes it?
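For what it's worth, my understanding is that the custom layer's parameters are stored in the PLAN (whatever the plugin writes in serialize()), and that deserializeCudaEngine() needs an nvinfer1::IPluginFactory to rebuild the plugin from that blob. Below is a rough sketch, assuming the Reshape class from the first post and that the plugin layer was named "rpn_cls_score_reshape" when the network was defined (the name is only an example):

// Sketch only; assumes NvInfer.h, <cassert>, <cstring> and <memory> are included.
class PluginFactory : public nvinfer1::IPluginFactory
{
public:
    // Called once per plugin layer found in the PLAN during deserialization.
    nvinfer1::IPlugin* createPlugin(const char* layerName, const void* serialData, size_t serialLength) override
    {
        if (!strcmp(layerName, "rpn_cls_score_reshape"))   // example layer name
        {
            assert(mPluginRshp2 == nullptr);
            // The deserialization constructor recovers mCopySize from the blob
            // written by Reshape::serialize() at build time.
            mPluginRshp2 = std::unique_ptr<Reshape<2>>(new Reshape<2>(serialData, serialLength));
            return mPluginRshp2.get();
        }
        assert(0 && "unknown plugin layer name");
        return nullptr;
    }

    void destroyPlugin()
    {
        mPluginRshp2.reset();
    }

private:
    std::unique_ptr<Reshape<2>> mPluginRshp2{ nullptr };
};

// Usage when deserializing the PLAN:
//   PluginFactory pluginFactory;
//   nvinfer1::ICudaEngine* engine = runtime->deserializeCudaEngine(modelMen, modelsize, &pluginFactory);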

https://devtalk.nvidia.com/default/topic/1036202/how-to-solve-quot-buildcudaengine-quot-cost-long-time/#5264212