Fusion of convolution and BatchNorm

cuDNN: 8.0
I found two ways to fuse Convolution and BatchNorm.

  1. Use CUDNN_FUSED_SCALE_BIAS_ACTIVATION_CONV_BNSTATS with a cudnnFusedOpsPlan_t, setting the parameters through cudnnSetFusedOpsConstParamPackAttribute, cudnnSetFusedOpsVariantParamPackAttribute, etc. (see the sketch at the end of this post).
    The cuDNN API documentation describes it like this: "As of cuDNN 7.6.0, if the conditions in Table 26 are met, then the fully fused fast path will be triggered. Otherwise, a slower partially fused path will be triggered."
    However, when I set the convolution input type to CUDNN_DATA_FLOAT, the code raises CUDNN_STATUS_BAD_PARAM.

  2. Fuse the ops through the backend API.
    Does Scale_Bias_Activation_convolution_genStats correspond to Convolution-BatchNorm fusion? Does PSEUDO_HALF_CONFIG mean that I can only use CUDNN_DATA_HALF inputs?

If I want to achieve Convolution-BatchNorm fusion, how should I set the arguments?
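
For reference, here is a stripped-down sketch of how I set up the fused-ops plan for approach 1 (descriptor creation, error checking, the variant param pack and cudnnFusedOpsExecute are omitted; variable names and shapes are just mine, not from a sample). When I change the tensor descriptors from CUDNN_DATA_HALF to CUDNN_DATA_FLOAT, this setup is what fails with CUDNN_STATUS_BAD_PARAM:

```cpp
#include <cudnn.h>

void make_conv_bnstats_plan(cudnnHandle_t handle,
                            cudnnTensorDescriptor_t xDesc,          // NHWC, CUDNN_DATA_HALF
                            cudnnFilterDescriptor_t wDesc,          // NHWC, CUDNN_DATA_HALF
                            cudnnTensorDescriptor_t yDesc,          // NHWC, CUDNN_DATA_HALF
                            cudnnTensorDescriptor_t yStatsDesc,     // 1xCx1x1, CUDNN_DATA_FLOAT
                            cudnnConvolutionDescriptor_t convDesc,  // compute type CUDNN_DATA_FLOAT
                            cudnnFusedOpsPlan_t *plan,
                            size_t *workspaceBytes)
{
    cudnnFusedOpsConstParamPack_t constPack;
    cudnnCreateFusedOpsConstParamPack(&constPack, CUDNN_FUSED_SCALE_BIAS_ACTIVATION_CONV_BNSTATS);

    // Descriptors of the convolution and of the per-channel sum / sum-of-squares output.
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_XDESC, xDesc);
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_WDESC, wDesc);
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_CONV_DESC, convDesc);
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_YDESC, yDesc);
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_YSTATS_DESC, yStatsDesc);

    cudnnBatchNormMode_t bnMode = CUDNN_BATCHNORM_SPATIAL_PERSISTENT;
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_BN_MODE, &bnMode);

    // Declare which device pointers will be handed over at execution time.
    cudnnFusedOpsPointerPlaceHolder_t ptrMode = CUDNN_PTR_16B_ALIGNED;
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_XDATA_PLACEHOLDER, &ptrMode);
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_WDATA_PLACEHOLDER, &ptrMode);
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_YDATA_PLACEHOLDER, &ptrMode);
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_YSUM_PLACEHOLDER, &ptrMode);
    cudnnSetFusedOpsConstParamPackAttribute(constPack, CUDNN_PARAM_YSQSUM_PLACEHOLDER, &ptrMode);

    cudnnCreateFusedOpsPlan(plan, CUDNN_FUSED_SCALE_BIAS_ACTIVATION_CONV_BNSTATS);
    cudnnMakeFusedOpsPlan(handle, *plan, constPack, workspaceBytes);
}
```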

Hi,

Please refer to the following for a cuDNN fused-operator sample.

For more details,

Thank you.

Regarding the CUDNN_STATUS_BAD_PARAM issue, @hbwx26, can you provide the API log?

Yes, Scale_Bias_Activation_convolution_genStats is the forward fusion pattern used to achieve conv-BN fusion. The other one you will need is Scale_Bias_Activation_ConvBwdFilter in the backward path.
PSEUDO_HALF_CONFIG means that all the storage tensors are in FP16 while all the compute precision is FP32.
Unfortunately we don’t have a frontend sample handy; we are working on one that we should be able to share soon.
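
In the meantime, here is a rough sketch of how the conv + genStats half of that forward pattern can be expressed with the raw backend API. The backend tensor descriptors and the finalized convolution-forward operation are assumed to exist already; error checks, the pointwise scale/bias/activation ops, and the engine/plan/execution steps are left out, so treat it as illustrative only:

```cpp
#include <cudnn.h>

// Assumed to be created/finalized elsewhere:
//   cudnnHandle_t handle;
//   cudnnBackendDescriptor_t convOp;                     // CUDNN_BACKEND_OPERATION_CONVOLUTION_FORWARD_DESCRIPTOR
//   cudnnBackendDescriptor_t yDesc;                      // conv output tensor (FP16, usually virtual)
//   cudnnBackendDescriptor_t sumDesc, sqSumDesc;         // per-channel FP32 tensors, shape 1xCx1x1

cudnnBackendDescriptor_t genStatsOp, opGraph;

// GenStats op: consumes the convolution output and produces per-channel sum / sum-of-squares.
cudnnBackendCreateDescriptor(CUDNN_BACKEND_OPERATION_GEN_STATS_DESCRIPTOR, &genStatsOp);

cudnnGenStatsMode_t statsMode = CUDNN_GENSTATS_SUM_SQSUM;
cudnnBackendSetAttribute(genStatsOp, CUDNN_ATTR_OPERATION_GENSTATS_MODE,
                         CUDNN_TYPE_GENSTATS_MODE, 1, &statsMode);

cudnnDataType_t mathPrec = CUDNN_DATA_FLOAT;  // FP32 accumulation over FP16 storage, as in PSEUDO_HALF_CONFIG
cudnnBackendSetAttribute(genStatsOp, CUDNN_ATTR_OPERATION_GENSTATS_MATH_PREC,
                         CUDNN_TYPE_DATA_TYPE, 1, &mathPrec);

cudnnBackendSetAttribute(genStatsOp, CUDNN_ATTR_OPERATION_GENSTATS_XDESC,
                         CUDNN_TYPE_BACKEND_DESCRIPTOR, 1, &yDesc);      // same tensor as the conv output
cudnnBackendSetAttribute(genStatsOp, CUDNN_ATTR_OPERATION_GENSTATS_SUMDESC,
                         CUDNN_TYPE_BACKEND_DESCRIPTOR, 1, &sumDesc);
cudnnBackendSetAttribute(genStatsOp, CUDNN_ATTR_OPERATION_GENSTATS_SQSUMDESC,
                         CUDNN_TYPE_BACKEND_DESCRIPTOR, 1, &sqSumDesc);
cudnnBackendFinalize(genStatsOp);

// Put the convolution and the stats op into one operation graph; sharing the Y tensor links them,
// which is what lets the engine pick a fused conv + genStats kernel.
cudnnBackendDescriptor_t ops[] = {convOp, genStatsOp};
cudnnBackendCreateDescriptor(CUDNN_BACKEND_OPERATIONGRAPH_DESCRIPTOR, &opGraph);
cudnnBackendSetAttribute(opGraph, CUDNN_ATTR_OPERATIONGRAPH_OPS,
                         CUDNN_TYPE_BACKEND_DESCRIPTOR, 2, ops);
cudnnBackendSetAttribute(opGraph, CUDNN_ATTR_OPERATIONGRAPH_HANDLE,
                         CUDNN_TYPE_HANDLE, 1, &handle);
cudnnBackendFinalize(opGraph);
```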

Hello,
I didn’t see CUDNN_BACKEND_OPERATION_GEN_STATS_DESCRIPTOR being used in the latest frontend code. Is the Scale_Bias_Activation_ConvBwdFilter pattern already supported by the latest cuDNN backend? If so, will a corresponding sample be added to the frontend code?

Thanks
Gino

BTW, normally BatchNorm can be converted to FusedBatchNorm (equivalent to Scale + Bias) in the fine-tuning phase, and then the Conv_Scale_Bias pattern can be used in the inference phase, so why bother supporting the Scale_Bias_Activation_convolution_genStats pattern? Is it only used in the forward pass of training?
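
To be concrete, the conversion I have in mind is just the usual per-channel folding of the trained BN parameters into an equivalent scale and bias, something like the sketch below (plain C; the buffer names are only illustrative):

```c
#include <math.h>

/* Fold a trained BatchNorm into an equivalent per-channel scale + bias,
 * so inference can use a Conv_Scale_Bias pattern instead of a real BN. */
void fold_batchnorm(const float *gamma, const float *beta,
                    const float *running_mean, const float *running_var,
                    float eps, int channels,
                    float *eq_scale, float *eq_bias)
{
    for (int c = 0; c < channels; ++c) {
        eq_scale[c] = gamma[c] / sqrtf(running_var[c] + eps);
        eq_bias[c]  = beta[c] - running_mean[c] * eq_scale[c];
        /* y[c] = eq_scale[c] * conv(x)[c] + eq_bias[c] reproduces the BN output at inference time */
    }
}
```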

Thanks
Gino