How does TensorRT handle the BatchNorm layer in a Q/DQ graph?

I noticed that the ONNX model exported with NVIDIA's pytorch-quantization tool keeps the BatchNorm layers.

(screenshot of the exported ONNX graph showing the retained BatchNormalization nodes)
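For context, the export followed roughly this flow; a minimal sketch assuming NVIDIA's pytorch-quantization package, with a placeholder model and output path rather than my exact script:

```python
# Minimal sketch of the Q/DQ export flow (pytorch-quantization + torch.onnx.export);
# the model choice and file name are placeholders.
import torch
import torchvision
from pytorch_quantization import nn as quant_nn
from pytorch_quantization import quant_modules

quant_modules.initialize()  # monkey-patch torch.nn layers with quantized versions

model = torchvision.models.resnet18(pretrained=True).eval()
# ... calibration of the TensorQuantizer amax values would happen here ...

# Export TensorQuantizer nodes as ONNX QuantizeLinear/DequantizeLinear pairs.
quant_nn.TensorQuantizer.use_fb_fake_quant = True
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet18_qdq.onnx", opset_version=13)
```

The exported graph still contains each Conv followed by a BatchNormalization node, with Q/DQ pairs inserted in front of the convolutions.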

But during the TensorRT engine-building stage, the verbose log shows that those BatchNorm layers are first registered and then removed.
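For reference, the engine was built along these lines; a minimal sketch using the TensorRT Python API (TensorRT 8+), with a placeholder file name:

```python
# Minimal sketch of an INT8 Q/DQ engine build with a VERBOSE logger, which
# emits messages like the excerpt below; the file name is a placeholder.
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flags)

parser = trt.OnnxParser(network, logger)
with open("resnet18_qdq.onnx", "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)  # honor the Q/DQ scales in the graph
engine_bytes = builder.build_serialized_network(network, config)
```

The relevant excerpt from the verbose build log: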

```
...
[TensorRT] VERBOSE: Removing QuantizeLinear_261_quantize_scale_node
[TensorRT] VERBOSE: QDQ graph optimizer quantization pass - Generate quantized ops
[TensorRT] VERBOSE: Removing BatchNormalization_13
[TensorRT] VERBOSE: Removing BatchNormalization_29
[TensorRT] VERBOSE: Removing BatchNormalization_44
[TensorRT] VERBOSE: Removing BatchNormalization_60
[TensorRT] VERBOSE: Removing BatchNormalization_75
[TensorRT] VERBOSE: Removing BatchNormalization_91
[TensorRT] VERBOSE: Removing BatchNormalization_120
[TensorRT] VERBOSE: Removing BatchNormalization_106
[TensorRT] VERBOSE: Removing BatchNormalization_136
[TensorRT] VERBOSE: Removing BatchNormalization_151
[TensorRT] VERBOSE: Removing BatchNormalization_167
[TensorRT] VERBOSE: Removing BatchNormalization_196
[TensorRT] VERBOSE: Removing BatchNormalization_182
[TensorRT] VERBOSE: Removing BatchNormalization_212
[TensorRT] VERBOSE: Removing BatchNormalization_227
[TensorRT] VERBOSE: Removing BatchNormalization_243
[TensorRT] VERBOSE: Removing BatchNormalization_272
[TensorRT] VERBOSE: Removing BatchNormalization_258
[TensorRT] VERBOSE: Removing BatchNormalization_288
[TensorRT] VERBOSE: Removing BatchNormalization_303
[TensorRT] VERBOSE: QuantizeDoubleInputNodes: fusing (DequantizeLinear_5_quantize_scale_node and DequantizeLinear_11_quantize_scale_node) into Conv_12
[TensorRT] VERBOSE: Removing DequantizeLinear_5_quantize_scale_node
...
```

But there was no log message indicating that the BatchNorm layers were folded into the adjacent convolution layers, so does TensorRT simply delete the BatchNorm layers outright?
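To be clear about what I would have expected: the standard Conv+BN folding absorbs the BN parameters into the convolution's weights and bias, so the BN node can be removed without losing its effect. A sketch of that arithmetic (my own illustration of the textbook transform, not TensorRT internals):

```python
# Textbook Conv+BN folding (illustration only, not TensorRT's internal code):
# y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
#   = conv'(x) with rescaled weights and a shifted bias.
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """w: (out_ch, in_ch, kh, kw) conv weights; the rest are per-channel (out_ch,) BN params."""
    scale = gamma / np.sqrt(var + eps)         # per-output-channel multiplier
    w_folded = w * scale[:, None, None, None]  # rescale each output channel's filter
    b_folded = (b - mean) * scale + beta       # fold the BN shift into the bias
    return w_folded, b_folded
```

If TensorRT performs this fold, the "Removing BatchNormalization_*" messages would be harmless; I just could not find a log line confirming it.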