TensorRT supports Caffe model layers

Hi xushen,

There is no concrete plan for supporting SSD in TensorRT yet.
However, TensorRT 2.1 provides a custom-layer API (IPlugin) that lets users implement unsupported layers themselves and bind them into the inference flow.

Please check the FCPlugin sample for more details.

Hi, I have a problem. I implemented my custom layer following the sampleFasterRCNN sample and the official tutorials, but when I run the program I get the error "Could not parse layer type Permute" (and similarly for permute_param). I'm still confused about how the parser handles unsupported layers and decides whether to use a custom plugin or a built-in layer. Could you please share faster_rcnn_test_iplugin.prototxt for reference?


FasterRCNN prototxt is located at:

layer {
   bottom: "rpn_cls_score"
   top: "rpn_cls_score_reshape"
   name: "ReshapeCTo2"            # <- string parsed by plugin API
   type: "IPlugin"
}

Thanks very much, it works!
But another issue: when I try to create a CUDA engine from the network with the plugin layer, buildCudaEngine() fails with "resources.cpp (57) - Cuda Error in gieCudaMalloc: 2". The weird thing is that this error is raised even before configure() and getWorkspaceSize() of my plugin layer are called. Any suggestions?


May I know the batch size you used? I guess this issue is related to memory.


Yes, it was the batch-size issue; it works now!


We have written a face-recognition sample to demonstrate the TensorRT 2.1 Plugin API.
Please check this GitHub repository for more details:


I have a quick question.
In the TensorRT samples, the input size in faster_rcnn_test_iplugin.prototxt is 375 x 500.

However, all the .prototxt files on https://github.com/rbgirshick/py-faster-rcnn use 224 x 224.

How did you generate faster_rcnn_test_iplugin.prototxt, and how did you train the Faster RCNN Caffe model?