CLIPPED_RELU vs cudnnConvolutionBiasActivationForward()

Hi,

I was trying to optimize execution of Conv2D->Bias->ReLU_with_upper_limit (e.g. ReLU6) with cudnnConvolutionBiasActivationForward(). I have noticed that the behavior of cudnnConvolutionBiasActivationForward is not consistent with its documentation: it should return NOT_SUPPORTED for CLIPPED_RELU activation, but instead it executes without an error. At the same time, it ignores the clipping threshold of the activation and behaves like a plain ReLU.

So, where is the problem? Should cudnnConvolutionBiasActivationForward return NOT_SUPPORTED for CLIPPED_RELU, or should it correctly clip the values? Is there a way to optimize the Conv2D->Bias->ReLU6 operation on GPUs with compute capability < 7.5? I also tried the backend API, but it ended the same way: no error, yet unclipped results.
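For reference, here is a trimmed sketch of how I call the fused path (descriptor creation, algorithm selection, and buffer allocation omitted; the wrapper name and parameter names are mine):

```c
#include <cudnn.h>

/* Fused Conv2D->Bias->Activation, with actDesc configured as
 * CUDNN_ACTIVATION_CLIPPED_RELU and coef = 6.0 (i.e. ReLU6).
 * On cuDNN 8.1.1 this returned CUDNN_STATUS_SUCCESS but produced
 * plain-ReLU output with no clipping at 6. */
cudnnStatus_t fused_conv_bias_relu6(
    cudnnHandle_t handle,
    cudnnTensorDescriptor_t xDesc, const void *x,
    cudnnFilterDescriptor_t wDesc, const void *w,
    cudnnConvolutionDescriptor_t convDesc,
    cudnnConvolutionFwdAlgo_t algo,
    void *workspace, size_t workspaceBytes,
    cudnnTensorDescriptor_t biasDesc, const void *bias,
    cudnnActivationDescriptor_t actDesc, /* CLIPPED_RELU, coef = 6.0 */
    cudnnTensorDescriptor_t yDesc, void *y)
{
    const float alpha1 = 1.0f; /* scales the convolution result */
    const float alpha2 = 0.0f; /* zero: the z input is effectively unused */
    /* z aliases y here (zDesc == yDesc), which the docs allow */
    return cudnnConvolutionBiasActivationForward(
        handle, &alpha1, xDesc, x, wDesc, w, convDesc, algo,
        workspace, workspaceBytes, &alpha2, yDesc, y,
        biasDesc, bias, actDesc, yDesc, y);
}
```

As far as I can tell from the documentation, only CUDNN_ACTIVATION_RELU (and CUDNN_ACTIVATION_IDENTITY, with a specific algorithm) are supported by this fused call, which is why I expected NOT_SUPPORTED here.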

And by the way, from the documentation of the coef parameter in cudnnSetActivationDescriptor():

Input. Floating point number. When the activation mode (see cudnnActivationMode_t) is set to CUDNN_ACTIVATION_CLIPPED_RELU, this input specifies the clipping threshold; and when the activation mode is set to CUDNN_ACTIVATION_RELU, this input specifies the upper bound.

What is the upper bound for CUDNN_ACTIVATION_RELU?
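To make the question concrete, here is a minimal probe (my own sketch, not from any NVIDIA sample) that sets coef on a plain-ReLU descriptor and reads it back; the documentation never says what formula this value feeds into:

```c
#include <cudnn.h>
#include <stdio.h>

int main(void) {
    cudnnActivationDescriptor_t act;
    cudnnActivationMode_t mode;
    cudnnNanPropagation_t nanOpt;
    double coef;

    cudnnCreateActivationDescriptor(&act);
    /* coef = 6.0: the setter docs call this an "upper bound" for
       CUDNN_ACTIVATION_RELU, but give no formula for how it is used. */
    cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU,
                                 CUDNN_NOT_PROPAGATE_NAN, 6.0);
    cudnnGetActivationDescriptor(act, &mode, &nanOpt, &coef);
    printf("mode = %d, coef = %f\n", (int)mode, coef);
    cudnnDestroyActivationDescriptor(act);
    return 0;
}
```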

My specs:
Windows 10 with CUDA 11.1 and cuDNN 8.1.1
GTX1060 6GB (driver: 460.89)

Hi @abartoszek ,
Apologies for the delayed response.
I am checking on the same.
Thank you for your patience.

Hi @abartoszek ,
Are you still facing the issue?

Thanks!

Yes, I am still experiencing this issue after updating CUDA to 11.2, cuDNN to 8.2.1, and the driver to 471.11. However, I have noticed that cudnnConvolutionBiasActivationForward now returns NOT_SUPPORTED for CLIPPED_RELU, which is consistent with the documentation.

Still, the description of the coef argument in cudnnSetActivationDescriptor() is incomprehensible (at least to me) and inconsistent with the corresponding description in cudnnGetActivationDescriptor().


I came across this as well, and the documentation still contradicts itself:
3.2.37. cudnnGetActivationDescriptor()
coef
Output. Floating point number to specify the clipping threshold when the activation mode is set to CUDNN_ACTIVATION_CLIPPED_RELU, or to specify the alpha coefficient when the activation mode is set to CUDNN_ACTIVATION_ELU.

3.2.77. cudnnSetActivationDescriptor()
coef
Input. Floating point number. When the activation mode (refer to cudnnActivationMode_t) is set to CUDNN_ACTIVATION_CLIPPED_RELU, this input specifies the clipping threshold; and when the activation mode is set to CUDNN_ACTIVATION_RELU, this input specifies the upper bound.

I would not expect ReLU to use any scaling factor or threshold. The common definition of ReLU is the same one given in the RNN documentation (7.1.2.8. cudnnRNNMode_t):
ReLU(x) = max(x, 0)

So what does CUDNN_ACTIVATION_RELU actually do? Is coef used for ReLU, and if so, in what formula exactly?
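In other words, which of these two is CUDNN_ACTIVATION_RELU with a non-zero coef supposed to compute? A host-side sketch of the two candidate formulas (my reading of the docs, not actual cuDNN code):

```c
#include <math.h>

/* Common definition, matching the cudnnRNNMode_t docs: no coef involved. */
static float relu(float x) {
    return fmaxf(x, 0.0f);
}

/* What "upper bound" would suggest: the same formula the docs give for
   CUDNN_ACTIVATION_CLIPPED_RELU, i.e. min(max(x, 0), coef). */
static float relu_upper_bounded(float x, float coef) {
    return fminf(fmaxf(x, 0.0f), coef);
}
```

If it is the second one, it would be indistinguishable from CUDNN_ACTIVATION_CLIPPED_RELU, which only deepens the confusion.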