[TensorRT] Problem adding custom layer to a network defined using TensorRT API

Hello,
I’m trying to add a custom layer to my architecture, which is defined using the TensorRT API. Before implementing a more complicated layer, I have been trying to add to my network the same Reshape layer as the one implemented in the sampleCharRNN sample; this results in a segmentation fault.

I add the layer to the network this way, and I define it as an output:

// The plugin flattens the previous layer's output, so its size is C * H * W.
Reshape reshape(PREVIOUS_LAYER_NCHANNELS * PREVIOUS_LAYER_H * PREVIOUS_LAYER_W);
ITensor* ptr = previous_layer->getOutput(0);
auto plugin = network->addPlugin(&ptr, 1, reshape);
assert(plugin != nullptr);
plugin->setName("reshape");

plugin->getOutput(0)->setName(OUTPUT_BLOB_NAME);
network->markOutput(*plugin->getOutput(0));

The layer class implementation and the plugin factory implementation are as follows (same as the charRNN sample):

// Reshape plugin to feed RNN output into an FC layer correctly.
class Reshape : public IPlugin
{
public:
    Reshape(size_t size) : mSize(size) {}
    // Deserialization constructor: reads mSize back out of the serialized buffer.
    Reshape(const void* buf, size_t size)
    {
        assert(size == sizeof(mSize));
        mSize = *static_cast<const size_t*>(buf);
    }
    int getNbOutputs() const override { return 1; }
    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int) const override { return 0; }
    int enqueue(int batchSize, const void* const* inputs, void** outputs, void* workspace, cudaStream_t stream) override
    {
        CHECK(cudaMemcpyAsync(static_cast<float*>(outputs[0]),
                              static_cast<const float*>(inputs[0]),
                              sizeof(float) * mSize * batchSize, cudaMemcpyDefault, stream));
        return 0;
    }
    size_t getSerializationSize() override
    {
        return sizeof(mSize);
    }
    void serialize(void* buffer) override
    {
        *static_cast<size_t*>(buffer) = mSize;
    }
    void configure(const Dims*, int, const Dims*, int, int) override {}
    // The RNN outputs in {L, N, C}, but the FC layer needs {C, 1, 1}, so we convert
    // the RNN output to {L*N, C, 1, 1} and TensorRT handles the rest.
    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        assert(nbInputDims == 1);
        assert(index == 0);
        assert(inputs[index].nbDims == 3);
        return DimsNCHW(inputs[index].d[1] * inputs[index].d[0], inputs[index].d[2], 1, 1);
    }
private:
    size_t mSize{0};
};
class PluginFactory : public nvinfer1::IPluginFactory
{
public:
    // Deserialization plugin implementation.
    IPlugin* createPlugin(const char* layerName, const void* serialData, size_t serialLength) override
    {
        assert(!strncmp(layerName, "reshape", 7));
        if (!mPlugin) mPlugin = new Reshape(serialData, serialLength);
        return mPlugin;
    }
    void destroyPlugin()
    {
        if (mPlugin) delete mPlugin;
        mPlugin = nullptr;
    }
private:
    Reshape* mPlugin{nullptr};
}; // PluginFactory

I manage to create the engine

auto engine = builder->buildCudaEngine(*network);

but when I call the serialize method

(*modelStream) = engine->serialize();

I get a segmentation fault.

Without the custom plugin, the architecture compiles and works as expected.
Unfortunately, I cannot inspect the code of the serialize() function to see exactly where it goes wrong.
Any clue as to what is happening?

thanks,

f

It’s suggested that TensorRT questions be posted on the forum dedicated to TensorRT:

https://devtalk.nvidia.com/default/board/303/deep-learning-libraries/

The sampleCharRNN plugin is specific to that use case, there is no guarantee that it works in any other scenario. Have you tried using a plugin from one of the other samples that are shipping?

If that doesn’t solve your issue:
Please file a bug here: https://developer.nvidia.com/nvidia-developer-program
Please include the steps used to reproduce the problem along with the output of infer_device.

Hi,
As suggested above, I posted on the dedicated TensorRT forum:


https://devtalk.nvidia.com/default/topic/1032922/tensorrt/problem-adding-custom-tensorrt-layer-to-a-network-defined-using-tensorrt-api/post/5255497/?offset=4#5255634

There I added more details and the code to reproduce the problem.

thanks,

f

I see traffic on the other topic. I think your question is getting addressed there.

Solved here:

https://devtalk.nvidia.com/default/topic/1032922/tensorrt/problem-adding-custom-tensorrt-layer-to-a-network-defined-using-tensorrt-api/post/5255740/?offset=6#5260060

thanks,

f