CUDNN_STATUS_NOT_SUPPORTED for cudnnConvolutionBiasActivationForward()

Happy new year!
I’m a graduate student who has just started looking into cuDNN to implement fused operators for my platform.
Currently, I’m trying to get any working code for cudnnConvolutionBiasActivationForward(), and I wrote the call by extending the working cudnnConvolutionForward() code from the cuDNN code samples.
However, it fails with a CUDNN_STATUS_NOT_SUPPORTED error.

Since cudnnConvolutionBiasActivationForward() performs y = act ( alpha1 * conv(x) + alpha2 * z + bias ), I set the dimensions of ‘z’ and ‘bias’ equal to the output dimensions of ‘y’.
Then I create the activation descriptor with CUDNN_NOT_PROPAGATE_NAN and CUDNN_ACTIVATION_RELU.

To resolve the issue, I went through the possible failure scenarios described in the official doc ( https://docs.nvidia.com/deeplearning/cudnn/api/index.html#cudnnConvolutionBiasActivationForward ), and it seems I’m not violating any of them except possibly this one: “The second stride of biasDesc is not equal to one.”
I may be satisfying that one too, but I’m not sure how to check this property. To be honest, I don’t have a good understanding of why each tensor descriptor should also carry stride information, since strides feel more like a convolution property than an input-data property to me.

Any advice or thoughts would be greatly appreciated.
Also, if you can share any valid working code example, that would be very helpful. (I tried to find one, but couldn’t find any tutorial-style code.)

This is my current code.

// Define tensors: Idesc (Input), Odesc (Output), Bdesc (Bias), Zdesc (Z)
checkCudnnErr(cudnnSetTensorNdDescriptor(cudnnIdesc, dataType, convDim + 2, dimA_padded, strideA_padded));
checkCudnnErr(cudnnSetTensorNdDescriptor(cudnnOdesc, dataType, convDim + 2, outdimA_padded, outstrideA_padded));
checkCudnnErr(cudnnSetTensorNdDescriptor(cudnnBdesc, dataType, convDim + 2, outdimA_padded, outstrideA_padded));
checkCudnnErr(cudnnSetTensorNdDescriptor(cudnnZdesc, dataType, convDim + 2, outdimA_padded, outstrideA_padded));
checkCudnnErr(cudnnSetConvolutionNdDescriptor(cudnnConvDesc, convDim, padA, convstrideA, dilationA, mode, dataType));
checkCudnnErr(cudnnSetFilterNdDescriptor(cudnnFdesc, dataType, filterFormat, convDim + 2, filterdimA_padded));
checkCudnnErr(cudnnSetActivationDescriptor(cudnnActvDesc, CUDNN_ACTIVATION_RELU, CUDNN_NOT_PROPAGATE_NAN, std::numeric_limits<double>::max()));
checkCudaErr(cudaMemcpy(devPtrReorderedF, devPtrF, sizeof(devPtrF[0]) * filtersize, cudaMemcpyDeviceToDevice));
if (mathType == 1) {
checkCudnnErr(cudnnSetConvolutionMathType(cudnnConvDesc, CUDNN_TENSOR_OP_MATH));
}
cudnnConvolutionFwdAlgo_t algo = CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM;
checkCudnnErr(cudnnGetConvolutionForwardWorkspaceSize(
handle_, cudnnIdesc, cudnnFdesc, cudnnConvDesc, cudnnOdesc, algo, &workSpaceSize));
if (workSpaceSize > 0) {
checkCudaErr(cudaMalloc(&workSpace, workSpaceSize));
}
checkCudnnErr(cudnnConvolutionBiasActivationForward(…)); // pass arguments accordingly

Hi @sunggg,
Can you please check if the link below helps resolve your query?

Thanks!

Hi @AakankshaS, thanks for the suggestion.
I tried all of the forward-convolution algorithms, but unfortunately none of them worked.

Resolved.
The dimensions of the bias didn’t match what cuDNN expects: the bias descriptor must be shaped 1 x C x 1 x 1 (C = output channels), not the full output shape I was passing for ‘y’ and ‘z’.

This helped.