Restriction on RPROIPlugin's input fails after I modify the Faster R-CNN network.

I added a deconvolution layer before one input of the RPROI plugin in sampleFasterRCNN:
layer {
  name: "convf_up2"
  type: "Deconvolution"
  bottom: "convf"
  top: "convf_up2"
  param {
    lr_mult: 0.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 256
    bias_term: false
    pad: 1
    kernel_size: 4
    group: 256
    stride: 2
    weight_filler {
      type: "bilinear"
    }
  }
}

But I got this error from RPROIPlugin:

sampleFasterRCNN: NvPluginFasterRCNN.cu:83: virtual void nvinfer1::plugin::RPROIPlugin::configure(const nvinfer1::Dims*, int, const nvinfer1::Dims*, int, int): Assertion `inputDims[0].d[1] == inputDims[1].d[1] && inputDims[0].d[1] == inputDims[2].d[1]' failed.
[1] 7214 abort (core dumped) ./bazel-bin/sampleFasterRCNN/sampleFasterRCNN/sampleFasterRCNN
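
For what it's worth, the assertion text suggests the failure happens in the plugin's configure() call at build time. Below is a minimal sketch of what that check appears to do, assuming each input reaches the plugin as CHW dims (so d[1] would be the height); this is a guess based on the log, not NVIDIA's actual source, which is not public.

// Minimal sketch of the check implied by the assertion above -- not NVIDIA's
// closed-source implementation. Assuming each input arrives as CHW dims,
// d[1] is the height, so inputs 0, 1 and 2 must share the same spatial size.
// A stride-2 deconvolution on only one of the plugin's bottoms (convf_up2 here)
// doubles that input's H/W, which would break this equality.
#include <cassert>
#include <NvInfer.h>

void checkRPROIInputDims(const nvinfer1::Dims* inputDims, int nbInputs)
{
    assert(nbInputs >= 3);
    // Same expression as the failing assertion in NvPluginFasterRCNN.cu:83.
    assert(inputDims[0].d[1] == inputDims[1].d[1]
        && inputDims[0].d[1] == inputDims[2].d[1]);
}

If that reading is right, the other inputs of the fused RPROI layer (the RPN outputs) would also have to match the upsampled resolution before the plugin accepts them.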

Has anyone met this problem before? Could NVIDIA open-source the implementation of the RPROI plugin?

Hello, can you provide details on the platforms you are using?

Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version
TensorRT version

Can you describe how you are adding the plugin? (Which API are you using?)
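
(For reference, and with the caveat that this is a from-memory skeleton rather than the sample's exact code: in the stock sampleFasterRCNN the fused RPROI layer is mapped to the plugin through the Caffe parser's plugin factory, roughly as sketched below. The layer name and the elided plugin-creation arguments are assumptions to check against your copy of the sample.)

#include <cstring>
#include <NvInfer.h>
#include <NvCaffeParser.h>

// Hypothetical skeleton of a Caffe-parser plugin factory; in the real sample
// this is where nvinfer1::plugin::createFasterRCNNPlugin(...) gets called.
class PluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    bool isPlugin(const char* layerName) override
    {
        // Assumed name of the fused RPROI layer in the sample's prototxt.
        return std::strcmp(layerName, "RPROIFused") == 0;
    }

    nvinfer1::IPlugin* createPlugin(const char* layerName, const nvinfer1::Weights* weights, int nbWeights) override
    {
        // The sample constructs the fused plugin here (feature stride, NMS
        // parameters, pooling size, anchor ratios/scales). Returning nullptr
        // keeps this placeholder self-contained.
        (void) layerName; (void) weights; (void) nbWeights;
        return nullptr;
    }
};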