RPROI Failed for different parameters

I am getting the below error when I create a RPROI plugin with the following parameters:

constexpr int anchorsRatioCount = 3;
constexpr int anchorsScaleCount = 3;
constexpr int poolingH = 7;
constexpr int poolingW = 7;
constexpr int nmsMaxOut = 50;
constexpr float iouThreshold = 0.7f;
const float anchorsRatios[anchorsRatioCount] = { 0.5f, 1.0f, 2.0f };
const float anchorsScales[anchorsScaleCount] = { 1.0f, 4.0f, 8.0f };
constexpr int featureStride = 16;
constexpr int preNmsTop = 1024;
constexpr float minBoxSize = 1.0f;
constexpr float spatialScale = 0.0625f;

NvPluginFasterRCNN.cu:170: virtual int nvinfer1::plugin::RPROIPlugin::enqueue(int, const void* const*, void**, void*, cudaStream_t): Assertion `status == STATUS_SUCCESS' failed.
Aborted

Is this a valid set of inputs for this plugin?

Hello, can you provide details on the platforms you are using?

Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version
TensorRT version

Please include the steps/files used to reproduce the problem along with the output of infer_device.

Linux distro - L4T 28.2
GPU type - Tegra TX1
CUDA - 9.0
cuDNN - 7.1.5
TensorRT - 4.1.3
Not using Python or TensorFlow

I used the sampleFasterRCNN sample code and modified it to our requirements. I just wanted to know: assuming the input blob sizes are all correct, are the parameters listed above valid for the RPROI plugin?

Have you solved this issue?