Any guide for supporting int8 group_conv?

I want to implement an int8 group_conv kernel with CUTLASS. However, there are a few questions I'm confused about.

  1. CUTLASS group conv only supports the NHWC format, while according to section 6.10 of the Developer Guide :: NVIDIA Deep Learning TensorRT Documentation, TRT plugins will only be offered int8 + NCHW. Is there any way to bypass this?
  2. Regarding the bias format: I printed all the format combinations in the `supportsFormatCombination` function and found that only fp16 or fp32 bias is supported. Is this expected, or did I make a mistake somewhere? (I want an int32 bias datatype.)
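
For question 1, the workaround I'm considering is to accept int8 + NCHW from TRT and transpose to NHWC inside the plugin before calling the CUTLASS group-conv kernel (and transpose the output back). Below is a minimal host-side sketch of just the layout conversion, for illustration only; in the real plugin this would be a small CUDA transpose kernel, and the function name is my own:

```cpp
#include <cstdint>
#include <vector>

// NCHW -> NHWC layout conversion for int8 data (host-side illustration).
// In the plugin, this step would run as a CUDA kernel on the int8 input
// tensor before handing it to the CUTLASS NHWC group-conv kernel.
std::vector<int8_t> nchwToNhwc(const std::vector<int8_t>& src,
                               int n, int c, int h, int w) {
    std::vector<int8_t> dst(src.size());
    for (int in = 0; in < n; ++in)
        for (int ic = 0; ic < c; ++ic)
            for (int ih = 0; ih < h; ++ih)
                for (int iw = 0; iw < w; ++iw) {
                    // src index: ((n*C + c)*H + h)*W + w  (NCHW)
                    // dst index: ((n*H + h)*W + w)*C + c  (NHWC)
                    dst[((in * h + ih) * w + iw) * c + ic] =
                        src[((in * c + ic) * h + ih) * w + iw];
                }
    return dst;
}
```

The obvious downside is the extra transpose cost on every enqueue, so I'd like to know if there is a cleaner way to get NHWC int8 into the plugin directly.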