Workspace size is zero

The function

cudnnGetConvolutionForwardWorkspaceSize()

returns the amount of workspace memory (in bytes) required to run a forward convolution with a given algorithm. I tried using it as follows with cuDNN v7.6.0.

size_t workspace_bytes;
CHECK_CUDNN(cudnnGetConvolutionForwardWorkspaceSize(
      cudnnHandle, inputTensor, kernel, convolutionDescriptor, outputTensor,
      convolutionAlgorithm, &workspace_bytes));
std::cout << "The required size for performing the convolution is:\t"
            << ((workspace_bytes * 1.0) / (1024 * 1024)) << " MB" << std::endl;

However, the returned value is zero. CHECK_CUDNN is a macro I wrote to verify that each cuDNN call returns CUDNN_STATUS_SUCCESS. Interestingly, the convolution result is correct even though this function reports that no workspace memory is required. Any insight into why the returned value is zero would be highly appreciated.
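For context, a minimal sketch of such a check macro (not necessarily my exact definition) is:

#include <cudnn.h>
#include <cstdio>
#include <cstdlib>

// Print the cuDNN status string and exit if the wrapped call does not
// return CUDNN_STATUS_SUCCESS.
#define CHECK_CUDNN(call)                                              \
  do {                                                                 \
    cudnnStatus_t status_ = (call);                                    \
    if (status_ != CUDNN_STATUS_SUCCESS) {                             \
      std::fprintf(stderr, "cuDNN error at %s:%d: %s\n", __FILE__,     \
                   __LINE__, cudnnGetErrorString(status_));            \
      std::exit(EXIT_FAILURE);                                         \
    }                                                                  \
  } while (0)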

Did you ever figure out the problem? The same thing is happening for me, but not for everyone on my team.

It looks like your convolution can be computed without any additional scratch memory, going straight from the input tensor to the output tensor.

Given that the computation is correct, there is no problem.
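A workspace size of zero is valid to pass on to cudnnConvolutionForward: allocate scratch memory only when the reported size is non-zero. A minimal sketch, reusing the handle and descriptors from your snippet and assuming device buffers d_input, d_kernel, and d_output already exist:

// Allocate scratch memory only if cuDNN asked for any; a null workspace
// pointer with a size of 0 is accepted by cudnnConvolutionForward.
void* d_workspace = nullptr;
if (workspace_bytes > 0) {
  cudaMalloc(&d_workspace, workspace_bytes);
}

const float alpha = 1.0f, beta = 0.0f;
CHECK_CUDNN(cudnnConvolutionForward(
    cudnnHandle, &alpha,
    inputTensor, d_input,            // input descriptor + device data
    kernel, d_kernel,                // filter descriptor + device data
    convolutionDescriptor, convolutionAlgorithm,
    d_workspace, workspace_bytes,    // nullptr / 0 bytes is fine here
    &beta,
    outputTensor, d_output));        // output descriptor + device data

if (d_workspace != nullptr) {
  cudaFree(d_workspace);
}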

Not all algorithms require a workspace; CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM, for example, runs with zero workspace. Also, with integer division anything less than 1 MB would print as 0, though the * 1.0 in your snippet forces floating-point division.
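If you want to check which algorithms need a workspace and how much, the cuDNN 7 heuristics can list every forward algorithm together with its workspace requirement; a sketch using the same handle and descriptors as above:

// For each candidate forward algorithm, report whether it is usable for
// these descriptors and how much workspace it would need.
cudnnConvolutionFwdAlgoPerf_t perf[CUDNN_CONVOLUTION_FWD_ALGO_COUNT];
int returned = 0;
CHECK_CUDNN(cudnnGetConvolutionForwardAlgorithm_v7(
    cudnnHandle, inputTensor, kernel, convolutionDescriptor, outputTensor,
    CUDNN_CONVOLUTION_FWD_ALGO_COUNT, &returned, perf));

for (int i = 0; i < returned; ++i) {
  if (perf[i].status != CUDNN_STATUS_SUCCESS) continue;  // algo not usable here
  std::cout << "algo " << perf[i].algo
            << " needs " << perf[i].memory << " bytes of workspace\n";
}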