Does the RNNv2 layer not support INT8 calibration in TensorRT 4.0?

My network structure is CNN + BiLSTM. In FP32 mode it works well.
But after I prepare the calibration dataset
and enable INT8 mode:
builder->setInt8Mode(true);
builder->setInt8Calibrator(&calibrator);
I get the following error:
…/builder/cudnnBuilder2.cpp:685: virtual std::vector<nvinfer1::query::Ports<nvinfer1::query::TensorRequirements> > nvinfer1::builder::Node::getSupportedFormats(const nvinfer1::query::Ports<nvinfer1::query::AbstractTensor>&, const nvinfer1::cudnn::HardwareContext&, nvinfer1::builder::Format::Type, const nvinfer1::builder::FormatTypeHack&) const: Assertion `sf' failed.

So I removed the BiLSTM layer, kept only the CNN layers, and ran calibration again. This time I got the calibration table.

I want to know:
does TensorRT not support INT8 calibration for the RNNv2 layer, or am I doing something wrong?
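For reference, my calibrator implements the standard nvinfer1::IInt8EntropyCalibrator interface. A minimal sketch of it is below; the class name, the batch-loading helper (loadNextBatch), and the device-buffer member are my own placeholders, not TensorRT API, and the actual data-copying logic is elided.

```cpp
#include "NvInfer.h"
#include <fstream>
#include <iterator>
#include <vector>

// Minimal INT8 entropy-calibrator sketch for TensorRT 4.
// loadNextBatch() and mDeviceInput are hypothetical placeholders.
class MyCalibrator : public nvinfer1::IInt8EntropyCalibrator
{
public:
    int getBatchSize() const override { return mBatchSize; }

    // Bind the next calibration batch; return false when no batches remain.
    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        if (!loadNextBatch())   // hypothetical helper: H2D-copies one batch
            return false;       // signals TensorRT that calibration data is exhausted
        bindings[0] = mDeviceInput;
        return true;
    }

    // Reuse a previously written calibration table, if present.
    const void* readCalibrationCache(std::size_t& length) override
    {
        mCache.clear();
        std::ifstream in("CalibrationTable", std::ios::binary);
        if (in)
            mCache.assign(std::istreambuf_iterator<char>(in),
                          std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* cache, std::size_t length) override
    {
        std::ofstream out("CalibrationTable", std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    bool loadNextBatch();       // hypothetical: fills mDeviceInput from the dataset
    int mBatchSize{1};
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
};
```

With CNN-only networks this calibrator runs to completion and produces the table; the assertion above fires during the build as soon as the BiLSTM (RNNv2) layer is back in the network.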

I encountered the same problem.

Have you fixed this problem?