TensorRT Plugin and custom layers

Hi,
I am currently working on using TensorRT for a modified version of Faster RCNN with a different ROI pooling layer. Because of this different ROI pooling layer, I need to make my own plugin and cannot use the one provided. However, I am really stuck on how to approach this.

Looking at the Faster RCNN sample, it appears that the RCNN plugin is created using a call to “createFasterRCNNPlugin”, which is nowhere to be found and probably part of pre-compiled files. Would it be possible to have more documentation/samples on how this ROI plugin is actually created?

Specifically, how should one go about “converting” a user-defined layer in Caffe, written in CUDA, to a plugin in TensorRT? How does the plugin know how the layer is actually implemented and what it actually does? Basically, how does the actual runtime computation work in the enqueue() function? Would it be enough to just copy-paste the CUDA-written layer into the enqueue() function?

I would really appreciate any additional information on how to actually go back and forth between cuda/caffe user-defined layers and plugins, as this process is still very mysterious to me.

Thanks!

Same question here…

I could not find this function anywhere either, and I'm also highly interested in how to define a custom layer…

Also interested in this function.

Hi, have you already solved this problem? I am also stuck on this issue

I solved it. Contact me for more details.

I’ve been trying to understand how to do custom layers in TensorRT today too, and this example (by one of the NVIDIA admins) is much more useful than the bundled plug-in examples, I think, as it shows the actual low-level implementation and how layer types (found in the Caffe model description, I believe) are matched by name: