Define a Plugin Layer and create CUDA engine

Hello everyone, I want to implement a plugin layer and parse my Caffe model into TensorRT. Here is the code I am using:

// parse the caffe model to populate the network, then set the outputs
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    std::cout << deployFile << std::endl;
    std::cout << "Begin parsing model..." << std::endl;
    const IBlobNameToTensor* blobNameToTensor = parser->parse(deployFile.c_str(),
                                                              modelFile.c_str(),
                                                              *network,
                                                              DataType::kFLOAT);
    std::cout << "End parsing model..." << std::endl;
    // specify which tensors are outputs
    for (auto& s : outputs)
        network->markOutput(*blobNameToTensor->find(s.c_str()));

    // Build the engine
    builder->setMaxWorkspaceSize(10 << 20);	// we need about 6MB of scratch space for the plugin layer for batch size 5

    std::cout << "Begin building engine..." << std::endl;
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    std::cout << "End building engine..." << std::endl;

The program prints "Begin building engine..." and then core dumps. I don't know the details of implementing a plugin; I tried logging some messages from inside the plugin, but no luck. How exactly should I debug the CUDA engine creation?


To help debug this issue, could you enable the TensorRT logger and share more of its output with us?

class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        // print every message, including kINFO, while debugging
        std::cout << msg << std::endl;
    }
} gLogger;

IBuilder* builder = createInferBuilder(gLogger);

Check /usr/src/tensorrt/samples/sampleGoogleNet/sampleGoogleNet.cpp for details.

We also have several samples that demonstrate the plugin API:
Native sample:

TX2 sample: