Does cudnnSetRNNDescriptor_v6 support datatype CUDNN_DATA_INT8?

I don’t see any restriction in the docs here: https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#cudnnSetRNNDescriptor_v6

For the dataType argument it says:

dataType
  Input. Compute precision.

I’m guessing it does not support CUDNN_DATA_INT8: I modified the RNN example (RNN_example.cu) that ships with cuDNN 7.2 so that the descriptor call reads:

  cudnnErrCheck(cudnnSetRNNDescriptor_v6(cudnnHandle,
                                         rnnDesc,
                                         hiddenSize,
                                         numLayers,
                                         dropoutDesc,
                                         CUDNN_LINEAR_INPUT, // we can also skip the input matrix transformation
                                         bidirectional ? CUDNN_BIDIRECTIONAL : CUDNN_UNIDIRECTIONAL,
                                         RNNMode,
                                         RNNAlgo, // can be changed to use persistent RNNs on Pascal+ GPUs
                                         CUDNN_DATA_INT8)); // this is line 285

When I compile and run, I get:

 $ ./RNN_example_int8 100 4 512 64 2
   cuDNN Error: CUDNN_STATUS_NOT_SUPPORTED RNN_example_int8.cu 285
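
In case it helps anyone reproduce this without the full sample, here’s a minimal standalone probe (my own sketch, not part of the cuDNN samples) that just loops over the data types and prints the status cudnnSetRNNDescriptor_v6 returns for each; the hiddenSize/numLayers/cell-type choices are arbitrary:

  #include <cudnn.h>
  #include <stdio.h>

  int main(void) {
      cudnnHandle_t handle;
      cudnnCreate(&handle);

      cudnnDropoutDescriptor_t dropoutDesc;
      cudnnCreateDropoutDescriptor(&dropoutDesc);
      // dropout = 0 with a NULL states buffer is enough for this probe
      cudnnSetDropoutDescriptor(dropoutDesc, handle, 0.0f, NULL, 0, 0ULL);

      cudnnDataType_t types[] = { CUDNN_DATA_FLOAT, CUDNN_DATA_DOUBLE,
                                  CUDNN_DATA_HALF,  CUDNN_DATA_INT8 };
      const char *names[]     = { "FLOAT", "DOUBLE", "HALF", "INT8" };

      for (int i = 0; i < 4; ++i) {
          cudnnRNNDescriptor_t rnnDesc;
          cudnnCreateRNNDescriptor(&rnnDesc);
          // hiddenSize/numLayers/cell type are fixed; only dataType varies
          cudnnStatus_t st = cudnnSetRNNDescriptor_v6(handle, rnnDesc,
                                                      512, 2, dropoutDesc,
                                                      CUDNN_LINEAR_INPUT,
                                                      CUDNN_UNIDIRECTIONAL,
                                                      CUDNN_LSTM,
                                                      CUDNN_RNN_ALGO_STANDARD,
                                                      types[i]);
          printf("CUDNN_DATA_%s: %s\n", names[i], cudnnGetErrorString(st));
          cudnnDestroyRNNDescriptor(rnnDesc);
      }

      cudnnDestroyDropoutDescriptor(dropoutDesc);
      cudnnDestroy(handle);
      return 0;
  }

Based on the error above I’d expect FLOAT/DOUBLE/HALF to print CUDNN_STATUS_SUCCESS and INT8 to print CUDNN_STATUS_NOT_SUPPORTED, but I can’t verify that against the source, hence the question.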

So I’m assuming, based on this CUDNN_STATUS_NOT_SUPPORTED return, that CUDNN_DATA_INT8 isn’t supported here, even though the docs don’t state any restriction. Can anyone at NVIDIA confirm? If it’s not supported, can the API docs be updated to reflect that?

Also, will CUDNN_DATA_INT8 be supported for RNNs in a future release?

Thanks.

Phil
(wishing I could just take a peek at the cuDNN source to see if this is indeed the case, but alas, it’s not open source)