1D Dilated convolution Descriptor setup

If anyone could share some wisdom with me, that would be great. I'm coding a 1D time-series NN with dilated convolutional layers, and I can't seem to find a working set of descriptors for those layers, so I'm worried there's something I'm missing. Initially I would get a BAD_PARAM thrown from either cudnnGetConvolutionNdForwardOutputDim or cudnnGetConvolutionForwardWorkspaceSize. Currently I receive a STATUS_NOT_SUPPORTED from cudnnGetConvolutionForwardWorkspaceSize.

This is from my initial (input) layer, which is convolutional and is set up as follows:

int n = 1;
int c = DATACHANNELS;//31
int h = m_final_data_width;//7471
int w = 1;//unused

const int dataDims[3] = { 1, DATACHANNELS, m_final_data_width };

const int dataStride[3] = { DATACHANNELS * m_final_data_width, m_final_data_width, 1 };

checkCUDNN(cudnnSetTensorNdDescriptor( dataTensor,
CUDNN_DATA_FLOAT,
3,
dataDims,
dataStride));
/////////////////////////////////////////////////////////////////////////////////////////
n = 1;
c = m_Params.ConvParams.output_channels;//20
h = m_Params.ConvParams.outputs;//932
w = 1;//unused

const int ConvDims[3] = { n, c, h };

const int ConvStride[3] = { c * h, h, 1 };

checkCUDNN(cudnnSetTensorNdDescriptor( m_Descriptors.ConvDescriptors.ConvTensor,
CUDNN_DATA_FLOAT,
3,
ConvDims,
ConvStride));

const int ConvBiasDims[3] = { 1, m_Params.ConvParams.output_channels, 1 };

const int ConvBiasStride[3] = { m_Params.ConvParams.output_channels, 1, 1 };

checkCUDNN(cudnnSetTensorNdDescriptor( m_Descriptors.ConvDescriptors.ConvBiasTensor,
CUDNN_DATA_FLOAT,
3,
ConvBiasDims,
ConvBiasStride));

const int tempFilterDim[3] = { m_Params.ConvParams.output_channels, m_Params.ConvParams.input_channels, m_Params.ConvParams.kernal_size };

checkCUDNN(cudnnSetFilterNdDescriptor( m_Descriptors.ConvDescriptors.ConvFilterDesc,
CUDNN_DATA_FLOAT,
CUDNN_TENSOR_NCHW,
3,
tempFilterDim));

int padding = 0;

checkCUDNN(cudnnSetConvolutionNdDescriptor( m_Descriptors.ConvDescriptors.ConvDesc,
1,//arrayLength: number of spatial dimensions
&padding,//padding
&m_Params.ConvParams.stride,
&m_Params.ConvParams.dilation,
CUDNN_CROSS_CORRELATION,
CUDNN_DATA_FLOAT));

size_t temp_workspace_size = 0;
size_t max_workspace_size = 0;

cudnnConvolutionFwdAlgoPerf_t TempFwdAlgos[CUDNN_CONVOLUTION_FWD_ALGO_COUNT];

int returnedFwdAlgo = 0;

checkCUDNN(cudnnFindConvolutionForwardAlgorithm(*pcudnnHandle,
*previousTensor,
m_Descriptors.ConvDescriptors.ConvFilterDesc,
m_Descriptors.ConvDescriptors.ConvDesc,
m_Descriptors.ConvDescriptors.ConvTensor,
CUDNN_CONVOLUTION_FWD_ALGO_COUNT,
&returnedFwdAlgo,
TempFwdAlgos));

m_Descriptors.ConvDescriptors.ConvFwdAlgo = TempFwdAlgos[0].algo;

int OutputDims = 0;

checkCUDNN(cudnnGetConvolutionNdForwardOutputDim( m_Descriptors.ConvDescriptors.ConvDesc,
*previousTensor,
m_Descriptors.ConvDescriptors.ConvFilterDesc,
1,
&OutputDims));

checkCUDNN(cudnnGetConvolutionForwardWorkspaceSize( *pcudnnHandle,
*previousTensor,
m_Descriptors.ConvDescriptors.ConvFilterDesc,
m_Descriptors.ConvDescriptors.ConvDesc,
m_Descriptors.ConvDescriptors.ConvTensor,
m_Descriptors.ConvDescriptors.ConvFwdAlgo,
&temp_workspace_size));

For clarity, the sizes of the various tensors are

Data → dims 1, 31, 7471; stride 231601, 7471, 1

Conv → dims 1, 20, 932; stride 18640, 932, 1

ConvBias → dims 1, 20, 1; stride 20, 1, 1

Filter(Nd)Desc → 20, 31, 7

Conv(Nd)Desc → stride 8, dilation 3

I can see in the documentation that 4D tensors and 2D convolutions have better support, but using those with the unused dimension padded out produced BAD_PARAMs. Using 3D tensors and the Nd versions of the convolution functions seems to get me a little further. Curiously, when I call cudnnGetConvolutionNdForwardOutputDim it returns a value of 1, which it shouldn't; it should be 932 for this dilated convolution. In the documentation this function returns, in the 1D case, a value for the output width according to the formula:

outputDim = 1 + ( inputDim + 2*pad - (((filterDim-1)*dilation)+1) )/convolutionStride;

In my application I'm doing this calculation myself rather than relying on this function. It does seem to me that this formula is incorrect; I believe the correct formula should be:

outputDim = 1 + ( inputDim + 2*pad - (((filterDim+1)*dilation)-1) )/convolutionStride;

With the plus and minus reversed. You can convince yourself of this by considering a 1D dilated convolution with a single output, for example with a filterDim (kernel size) of 3 and dilation 3. In that case, I believe, a single output should correspond to a receptive field (inputDim) of 11. A diagram would be helpful, and if there is some doubt I will fire up Paint.

I'm not sure whether this would account for my issue, but I thought it worth mentioning. Perhaps this is a typo rather than an error in the API. That said, if the functions are expecting tensors of a certain size, this could be my issue.

When I use the code above and call cudnnFindConvolutionForwardAlgorithm, every algo comes back with status NOT_SUPPORTED. Given that the program is entirely FLOAT_CONFIG and NCHW, it seems to me that IMPLICIT_GEMM at least should be supported.
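To pin down which algorithms are rejected and why, the perf array returned by cudnnFindConvolutionForwardAlgorithm carries a per-algorithm status that can be printed. A fragment, reusing the TempFwdAlgos and returnedFwdAlgo variables from the code above:

```cpp
// Each cudnnConvolutionFwdAlgoPerf_t entry carries its own status, so this
// shows whether IMPLICIT_GEMM specifically is the one being rejected.
for (int i = 0; i < returnedFwdAlgo; ++i) {
    printf("algo %d: status=%s time=%f ms workspace=%zu bytes\n",
           (int)TempFwdAlgos[i].algo,
           cudnnGetErrorString(TempFwdAlgos[i].status),
           TempFwdAlgos[i].time,
           TempFwdAlgos[i].memory);
}
```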

If anyone can help me with a working setup for 1D convolutional networks that would be great.
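For reference, the 4D route I keep seeing recommended treats the series as an N x C x 1 x W image and uses the 2D convolution descriptor with the height parameters fixed at 1. A rough sketch with my layer's numbers; the descriptor handles are assumed to be already created, and I haven't got this variant past BAD_PARAM myself:

```cpp
// 1D conv expressed as 2D: height fixed at 1, width carries the series.
// Numbers mirror my layer: 31 in-channels, 7471 samples, 20 out-channels,
// kernel 7, stride 8, dilation 3.
checkCUDNN(cudnnSetTensor4dDescriptor(dataTensor,
                                      CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                      1, 31, 1, 7471));   // N, C, H=1, W

checkCUDNN(cudnnSetFilter4dDescriptor(filterDesc,
                                      CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW,
                                      20, 31, 1, 7));     // K, C, H=1, W

checkCUDNN(cudnnSetConvolution2dDescriptor(convDesc,
                                           0, 0,   // pad_h, pad_w
                                           1, 8,   // stride_h=1, stride_w
                                           1, 3,   // dilation_h=1, dilation_w
                                           CUDNN_CROSS_CORRELATION,
                                           CUDNN_DATA_FLOAT));
```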

System: Win10 x64
Cuda 10.2.2
Cudnn 8.1
MSVS2019

Hi @mchl.hemingway ,
Apologies for the miss,
Are you still facing this issue?

Thanks!

I am doing similar work with 1D convolution, and I ran into the same result: CUDNN_STATUS_NOT_SUPPORTED. Do you have any example supporting 1D convolution using the Nd operations?