TensorRT-generated QAT engine: why is the engine bigger than the pretrained FP16 engine?

Description

I generated the QAT engine with ./trtexec --onnx={my pytorch-quantization toolkit ONNX model} --saveEngine={my QAT engine} --int8 --plugins={my built TensorRT plugin .so}

Environment

TensorRT Version: 8005
GPU Type: T4
Nvidia Driver Version: 460.91.03
CUDA Version: 11.4
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): None
PyTorch Version (if applicable): 1.10.0
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorrt:21.10-py3

Relevant Files

my QAT ONNX model:
yolov5s-6.0-qat-yolo-op.onnx (27.8 MB)

my generated engine size comparison:
(screenshot attachment showing the generated engine file sizes)

Steps To Reproduce

Generate TensorRT engine:

./trtexec --onnx=yolov5s-6.0-qat-yolo-op.onnx --workspace=10240 --int8 --saveEngine=/root/yolov5s-6.0-qat-int8.engine --plugins=/root/YoloLayer_TRT_v6.0/build/libyolo.so
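For reference, the same build can also be expressed through the TensorRT Python API. This is only a minimal sketch (untested) mirroring the trtexec flags above; the ONNX and plugin paths are the same ones from my command:

import ctypes
import tensorrt as trt

# Load the custom YoloLayer plugin library, as --plugins does
ctypes.CDLL("/root/YoloLayer_TRT_v6.0/build/libyolo.so")

logger = trt.Logger(trt.Logger.VERBOSE)
trt.init_libnvinfer_plugins(logger, "")

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov5s-6.0-qat-yolo-op.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 10240 << 20  # --workspace=10240 (MiB)
config.set_flag(trt.BuilderFlag.INT8)    # scales come from the Q/DQ nodes, no calibrator needed

serialized = builder.build_serialized_network(network, config)
with open("/root/yolov5s-6.0-qat-int8.engine", "wb") as f:
    f.write(serialized)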

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import sys
import onnx

# Usage: python check_model.py <your_model.onnx>
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
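If the checker raises no exception, the model structure is valid.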
  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

I have checked the ONNX model.
This is the log from when I exported the QAT ONNX model:

Creating ONNX file: checkpoint/yolov5s-6.0-qat.onnx
W1117 16:31:33.838725 140321547626240 tensor_quantizer.py:280] Use Pytorch's native experimental fake quantization.
/home/zhangbo/.virtualenvs/py38/lib/python3.8/site-packages/pytorch_quantization/nn/modules/tensor_quantizer.py:285: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  inputs, amax.item() / bound, 0,
/home/zhangbo/.virtualenvs/py38/lib/python3.8/site-packages/pytorch_quantization/nn/modules/tensor_quantizer.py:291: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  quant_dim = list(amax.shape).index(list(amax_sequeeze.shape)[0])
/home/zhangbo/.virtualenvs/py38/lib/python3.8/site-packages/pytorch_quantization/nn/modules/tensor_quantizer.py:285: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  inputs, amax.item() / bound, 0,
/home/zhangbo/.virtualenvs/py38/lib/python3.8/site-packages/pytorch_quantization/nn/modules/tensor_quantizer.py:291: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  quant_dim = list(amax.shape).index(list(amax_sequeeze.shape)[0])
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
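For context, the export itself roughly follows the pytorch-quantization workflow. A sketch of the relevant part (model and checkpoint handling omitted; "model" here is a placeholder for the QAT-finetuned YOLOv5s, and the custom YoloLayer symbolic is not shown):

import torch
from pytorch_quantization import quant_nn

# Use PyTorch's fake-quantization ops so QuantizeLinear/DequantizeLinear
# nodes appear in the exported graph (this triggers the warnings above)
quant_nn.TensorQuantizer.use_fb_fake_quant = True

model.eval()
dummy = torch.randn(1, 3, 640, 640)
torch.onnx.export(
    model, dummy, "checkpoint/yolov5s-6.0-qat.onnx",
    opset_version=13,  # matches the opset reported by trtexec
)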

This is the log from when I built the engine:

root@d741691190a8:/workspace/tensorrt/bin# ./trtexec --onnx=/root/yolov5s.onnx --workspace=10240 --int8 --saveEngine=/root/yolov5s-6.0-qat-int8-coco.engine --plugins=/root/workspace/plugins/YoloLayer_TRT_v6.0/build/libyolo.so --verbose
&&&& RUNNING TensorRT.trtexec [TensorRT v8003] # ./trtexec --onnx=/root/yolov5s.onnx --workspace=10240 --int8 --saveEngine=/root/yolov5s-6.0-qat-int8-coco.engine --plugins=/root/workspace/plugins/YoloLayer_TRT_v6.0/build/libyolo.so --verbose
[11/17/2021-17:31:09] [I] === Model Options ===
[11/17/2021-17:31:09] [I] Format: ONNX
[11/17/2021-17:31:09] [I] Model: /root/yolov5s.onnx
[11/17/2021-17:31:09] [I] Output:
[11/17/2021-17:31:09] [I] === Build Options ===
[11/17/2021-17:31:09] [I] Max batch: explicit
[11/17/2021-17:31:09] [I] Workspace: 10240 MiB
[11/17/2021-17:31:09] [I] minTiming: 1
[11/17/2021-17:31:09] [I] avgTiming: 8
[11/17/2021-17:31:09] [I] Precision: FP32+INT8
[11/17/2021-17:31:09] [I] Calibration: Dynamic
[11/17/2021-17:31:09] [I] Refit: Disabled
[11/17/2021-17:31:09] [I] Sparsity: Disabled
[11/17/2021-17:31:09] [I] Safe mode: Disabled
[11/17/2021-17:31:09] [I] Restricted mode: Disabled
[11/17/2021-17:31:09] [I] Save engine: /root/yolov5s-6.0-qat-int8-coco.engine
[11/17/2021-17:31:09] [I] Load engine:
[11/17/2021-17:31:09] [I] NVTX verbosity: 0
[11/17/2021-17:31:09] [I] Tactic sources: Using default tactic sources
[11/17/2021-17:31:09] [I] timingCacheMode: local
[11/17/2021-17:31:09] [I] timingCacheFile:
[11/17/2021-17:31:09] [I] Input(s)s format: fp32:CHW
[11/17/2021-17:31:09] [I] Output(s)s format: fp32:CHW
[11/17/2021-17:31:09] [I] Input build shapes: model
[11/17/2021-17:31:09] [I] Input calibration shapes: model
[11/17/2021-17:31:09] [I] === System Options ===
[11/17/2021-17:31:09] [I] Device: 0
[11/17/2021-17:31:09] [I] DLACore:
[11/17/2021-17:31:09] [I] Plugins: /root/workspace/plugins/YoloLayer_TRT_v6.0/build/libyolo.so
[11/17/2021-17:31:09] [I] === Inference Options ===
[11/17/2021-17:31:09] [I] Batch: Explicit
[11/17/2021-17:31:09] [I] Input inference shapes: model
[11/17/2021-17:31:09] [I] Iterations: 10
[11/17/2021-17:31:09] [I] Duration: 3s (+ 200ms warm up)
[11/17/2021-17:31:09] [I] Sleep time: 0ms
[11/17/2021-17:31:09] [I] Streams: 1
[11/17/2021-17:31:09] [I] ExposeDMA: Disabled
[11/17/2021-17:31:09] [I] Data transfers: Enabled
[11/17/2021-17:31:09] [I] Spin-wait: Disabled
[11/17/2021-17:31:09] [I] Multithreading: Disabled
[11/17/2021-17:31:09] [I] CUDA Graph: Disabled
[11/17/2021-17:31:09] [I] Separate profiling: Disabled
[11/17/2021-17:31:09] [I] Time Deserialize: Disabled
[11/17/2021-17:31:09] [I] Time Refit: Disabled
[11/17/2021-17:31:09] [I] Skip inference: Disabled
[11/17/2021-17:31:09] [I] Inputs:
[11/17/2021-17:31:09] [I] === Reporting Options ===
[11/17/2021-17:31:09] [I] Verbose: Enabled
[11/17/2021-17:31:09] [I] Averages: 10 inferences
[11/17/2021-17:31:09] [I] Percentile: 99
[11/17/2021-17:31:09] [I] Dump refittable layers:Disabled
[11/17/2021-17:31:09] [I] Dump output: Disabled
[11/17/2021-17:31:09] [I] Profile: Disabled
[11/17/2021-17:31:09] [I] Export timing to JSON file:
[11/17/2021-17:31:09] [I] Export output to JSON file:
[11/17/2021-17:31:09] [I] Export profile to JSON file:
[11/17/2021-17:31:09] [I]
[11/17/2021-17:31:09] [I] === Device Information ===
[11/17/2021-17:31:09] [I] Selected Device: Tesla T4
[11/17/2021-17:31:09] [I] Compute Capability: 7.5
[11/17/2021-17:31:09] [I] SMs: 40
[11/17/2021-17:31:09] [I] Compute Clock Rate: 1.59 GHz
[11/17/2021-17:31:09] [I] Device Global Memory: 15109 MiB
[11/17/2021-17:31:09] [I] Shared Memory per SM: 64 KiB
[11/17/2021-17:31:09] [I] Memory Bus Width: 256 bits (ECC enabled)
[11/17/2021-17:31:09] [I] Memory Clock Rate: 5.001 GHz
[11/17/2021-17:31:09] [I]
[11/17/2021-17:31:09] [I] TensorRT version: 8003
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::Clip_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::ScatterND version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::Proposal version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::Split version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[11/17/2021-17:31:09] [I] Loading supplied plugin library: /root/workspace/plugins/YoloLayer_TRT_v6.0/build/libyolo.so
[11/17/2021-17:31:09] [I] [TRT] [MemUsageChange] Init CUDA: CPU +328, GPU +0, now: CPU 335, GPU 1206 (MiB)
[11/17/2021-17:31:09] [I] Start parsing network model
[11/17/2021-17:31:09] [I] [TRT] ----------------------------------------------------------------
[11/17/2021-17:31:09] [I] [TRT] Input filename:   /root/yolov5s.onnx
[11/17/2021-17:31:09] [I] [TRT] ONNX IR version:  0.0.7
[11/17/2021-17:31:09] [I] [TRT] Opset version:    13
[11/17/2021-17:31:09] [I] [TRT] Producer name:    pytorch
[11/17/2021-17:31:09] [I] [TRT] Producer version: 1.10
[11/17/2021-17:31:09] [I] [TRT] Domain:
[11/17/2021-17:31:09] [I] [TRT] Model version:    0
[11/17/2021-17:31:09] [I] [TRT] Doc string:
[11/17/2021-17:31:09] [I] [TRT] ----------------------------------------------------------------
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::GridAnchor_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::GridAnchorRect_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::NMS_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::Reorg_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::Region_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::Clip_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::LReLU_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::PriorBox_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::Normalize_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::ScatterND version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::RPROI_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::BatchedNMS_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::BatchedNMSDynamic_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::FlattenConcat_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::CropAndResize version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::DetectionLayer_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::EfficientNMS_ONNX_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::EfficientNMS_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::Proposal version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::ProposalLayer_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::PyramidROIAlign_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::ResizeNearest_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::Split version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::SpecialSlice_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Plugin creator already registered - ::InstanceNormalization_TRT version 1
[11/17/2021-17:31:09] [V] [TRT] Adding network input: inputs.1 with dtype: float32, dimensions: (1, 3, 640, 640)
[11/17/2021-17:31:09] [V] [TRT] Registering tensor: inputs.1 for ONNX tensor: inputs.1
[11/17/2021-17:31:09] [V] [TRT] Importing initializer: model.0.conv.weight
[11/17/2021-17:31:09] [V] [TRT] Importing initializer: model.0.bn.weight
[11/17/2021-17:31:09] [V] [TRT] Importing initializer: model.0.bn.bias
[11/17/2021-17:31:09] [V] [TRT] Importing initializer: model.0.bn.running_mean
[11/17/2021-17:31:09] [V] [TRT] Importing initializer: model.0.bn.running_var
[11/17/2021-17:31:09] [V] [TRT] Importing initializer: model.1.conv.weight
[11/17/2021-17:31:09] [V] [TRT] Importing initializer: model.1.bn.weight
[11/17/2021-17:31:09] [V] [TRT] Importing initializer: model.1.bn.bias
.....
[11/17/2021-17:31:10] [V] [TRT] DequantizeLinear_794 [DequantizeLinear] inputs: [1391 -> (255, 512, 1, 1)[FLOAT]], [1388 -> (255)[FLOAT]], [1455 -> (255)[INT8]],
[11/17/2021-17:31:10] [V] [TRT] Registering tensor: 1392 for ONNX tensor: 1392
[11/17/2021-17:31:10] [V] [TRT] DequantizeLinear_794 [DequantizeLinear] outputs: [1392 -> (255, 512, 1, 1)[FLOAT]],
[11/17/2021-17:31:10] [V] [TRT] Parsing node: Conv_795 [Conv]
[11/17/2021-17:31:10] [V] [TRT] Searching for input: 1387
[11/17/2021-17:31:10] [V] [TRT] Searching for input: 1392
[11/17/2021-17:31:10] [V] [TRT] Searching for input: model.24.m.2.bias
[11/17/2021-17:31:10] [V] [TRT] Conv_795 [Conv] inputs: [1387 -> (1, 512, 20, 20)[FLOAT]], [1392 -> (255, 512, 1, 1)[FLOAT]], [model.24.m.2.bias -> (255)[FLOAT]],
[11/17/2021-17:31:10] [V] [TRT] Convolution input dimensions: (1, 512, 20, 20)
[11/17/2021-17:31:10] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call.
[11/17/2021-17:31:10] [V] [TRT] Registering layer: Conv_795 for ONNX node: Conv_795
[11/17/2021-17:31:10] [V] [TRT] Registering tensor: 1393 for ONNX tensor: 1393
[11/17/2021-17:31:10] [V] [TRT] Conv_795 [Conv] outputs: [1393 -> (1, 255, 20, 20)[FLOAT]],
[11/17/2021-17:31:10] [V] [TRT] Parsing node: YoloLayer_TRT_0 [YoloLayer_TRT]
[11/17/2021-17:31:10] [V] [TRT] Searching for input: 1369
[11/17/2021-17:31:10] [V] [TRT] Searching for input: 1381
[11/17/2021-17:31:10] [V] [TRT] Searching for input: 1393
[11/17/2021-17:31:10] [V] [TRT] YoloLayer_TRT_0 [YoloLayer_TRT] inputs: [1369 -> (1, 255, 80, 80)[FLOAT]], [1381 -> (1, 255, 40, 40)[FLOAT]], [1393 -> (1, 255, 20, 20)[FLOAT]],
[11/17/2021-17:31:10] [I] [TRT] No importer registered for op: YoloLayer_TRT. Attempting to import as plugin.
[11/17/2021-17:31:10] [I] [TRT] Searching for plugin: YoloLayer_TRT, plugin_version: 1, plugin_namespace:
[11/17/2021-17:31:10] [I] [TRT] Successfully created plugin: YoloLayer_TRT
[11/17/2021-17:31:10] [V] [TRT] Registering layer: YoloLayer_TRT_0 for ONNX node: YoloLayer_TRT_0
[11/17/2021-17:31:10] [V] [TRT] Registering tensor: output_0 for ONNX tensor: output
[11/17/2021-17:31:10] [V] [TRT] YoloLayer_TRT_0 [YoloLayer_TRT] outputs: [output -> (1, 6001, 1, 1)[FLOAT]],
[11/17/2021-17:31:10] [V] [TRT] Marking output_0 as output: output
[11/17/2021-17:31:10] [I] Finish parsing network model
[11/17/2021-17:31:10] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 367, GPU 1208 (MiB)
[11/17/2021-17:31:10] [I] FP32 and INT8 precisions have been specified - more performance might be enabled by additionally specifying --fp16 or --best
[11/17/2021-17:31:10] [I] [TRT] [MemUsageSnapshot] Builder begin: CPU 367 MiB, GPU 1214 MiB
[11/17/2021-17:31:10] [W] [TRT] Calibrator won't be used in explicit precision mode. Use quantization aware training to generate network with Quantize/Dequantize nodes.
[11/17/2021-17:31:10] [V] [TRT] Applying generic optimizations to the graph for inference.
[11/17/2021-17:31:10] [V] [TRT] Original: 1037 layers
[11/17/2021-17:31:10] [V] [TRT] After dead-layer removal: 1037 layers
[11/17/2021-17:31:10] [V] [TRT] QDQ graph optimizer - constant folding of Q/DQ initializers
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 1) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 0) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 25) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 24) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 59) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 58) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 94) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 93) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 129) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 128) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 163) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 162) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 198) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 197) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 233) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 232) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 268) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 267) [Constant]
........
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 959) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 963) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 962) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 929) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 928) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 946) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 945) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 981) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 980) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 1026) [Constant]
[11/17/2021-17:31:10] [V] [TRT] Removing (Unnamed Layer* 1025) [Constant]
[11/17/2021-17:31:10] [V] [TRT] After Myelin optimization: 557 layers
[11/17/2021-17:31:10] [V] [TRT] After scale fusion: 557 layers
[11/17/2021-17:31:10] [V] [TRT] QDQ graph optimizer - constant folding of Q/DQ initializers
[11/17/2021-17:31:10] [V] [TRT] QDQ graph optimizer forward pass - DQ motions and fusions
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.0.conv.weight with QuantizeLinear_7_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.1.conv.weight with QuantizeLinear_20_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.2.cv1.conv.weight with QuantizeLinear_33_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.2.m.0.cv1.conv.weight with QuantizeLinear_46_quantize_scale_node
....
QuantizeLinear_408_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.9.cv1.conv.weight with QuantizeLinear_421_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.9.cv2.conv.weight with QuantizeLinear_438_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.10.conv.weight with QuantizeLinear_451_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.13.cv1.conv.weight with QuantizeLinear_466_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.13.m.0.cv1.conv.weight with QuantizeLinear_479_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.13.m.0.cv2.conv.weight with QuantizeLinear_492_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.13.cv2.conv.weight with QuantizeLinear_505_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.13.cv3.conv.weight with QuantizeLinear_519_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.14.conv.weight with QuantizeLinear_532_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.17.cv1.conv.weight with QuantizeLinear_547_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.17.m.0.cv1.conv.weight with QuantizeLinear_560_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.17.m.0.cv2.conv.weight with QuantizeLinear_573_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.17.cv2.conv.weight with QuantizeLinear_586_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.17.cv3.conv.weight with QuantizeLinear_600_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.18.conv.weight with QuantizeLinear_613_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.20.cv1.conv.weight with QuantizeLinear_627_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.20.m.0.cv1.conv.weight with QuantizeLinear_640_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.20.m.0.cv2.conv.weight with QuantizeLinear_653_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.20.cv2.conv.weight with QuantizeLinear_666_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.20.cv3.conv.weight with QuantizeLinear_680_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.21.conv.weight with QuantizeLinear_693_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.23.cv1.conv.weight with QuantizeLinear_707_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.23.m.0.cv1.conv.weight with QuantizeLinear_720_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.23.m.0.cv2.conv.weight with QuantizeLinear_733_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.23.cv2.conv.weight with QuantizeLinear_746_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.23.cv3.conv.weight with QuantizeLinear_760_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.24.m.0.weight with QuantizeLinear_773_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.24.m.1.weight with QuantizeLinear_783_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] ConstWeightsQuantizeFusion: Fusing model.24.m.2.weight with QuantizeLinear_793_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Eliminating QuantizeLinear_68_quantize_scale_node which duplicates (Q) QuantizeLinear_28_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Removing QuantizeLinear_68_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Eliminating QuantizeLinear_175_quantize_scale_node which duplicates (Q) QuantizeLinear_108_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Removing QuantizeLinear_175_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Eliminating QuantizeLinear_309_quantize_scale_node which duplicates (Q) QuantizeLinear_215_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Removing QuantizeLinear_309_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Eliminating QuantizeLinear_389_quantize_scale_node which duplicates (Q) QuantizeLinear_349_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Removing QuantizeLinear_389_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Eliminating QuantizeLinear_500_quantize_scale_node which duplicates (Q) QuantizeLinear_461_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Removing QuantizeLinear_500_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Eliminating QuantizeLinear_581_quantize_scale_node which duplicates (Q) QuantizeLinear_542_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Removing QuantizeLinear_581_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Eliminating QuantizeLinear_768_quantize_scale_node which duplicates (Q) QuantizeLinear_608_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Removing QuantizeLinear_768_quantize_scale_node
[11/17/2021-17:31:10] [V] [TRT] Eliminating QuantizeLinear_661_quantize_scale_node which duplicates (Q) QuantizeLinear_622_quantize_scale_node
...
[11/17/2021-17:31:10] [V] [TRT] Removing QuantizeLinear_433_quantize_scale_node_clone_0
[11/17/2021-17:31:10] [V] [TRT] QDQ graph optimizer quantization pass - Generate quantized ops
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_11 with Mul_12
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_24 with Mul_25
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_37 with Mul_38
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_77 with Mul_78
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_50 with Mul_51
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_63 with Mul_64
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing PWN(Sigmoid_63, Mul_64) with Add_65
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_91 with Mul_92
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_104 with Mul_105
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_117 with Mul_118
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_184 with Mul_185
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_130 with Mul_131
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_143 with Mul_144
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing PWN(Sigmoid_143, Mul_144) with Add_145
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_157 with Mul_158
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_170 with Mul_171
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing PWN(Sigmoid_170, Mul_171) with Add_172
....
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_536 with Mul_537
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_551 with Mul_552
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_590 with Mul_591
[11/17/2021-17:31:10] [V] [TRT] PointWiseFusion: Fusing Sigmoid_564 with Mul_565
....
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_10
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_23
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_36
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_76
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_49
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_62
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_90
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_103
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_116
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_183
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_129
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_142
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_156
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_169
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_197
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_210
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_223
...
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_508
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_482
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_495
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_522
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_535
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_550
[11/17/2021-17:31:10] [V] [TRT] Removing BatchNormalization_589
.....
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_9 with PWN(Sigmoid_11, Mul_12)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_22 with PWN(Sigmoid_24, Mul_25)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_35 with PWN(Sigmoid_37, Mul_38)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_75 with PWN(Sigmoid_77, Mul_78)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_48 with PWN(Sigmoid_50, Mul_51)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_89 with PWN(Sigmoid_91, Mul_92)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_102 with PWN(Sigmoid_104, Mul_105)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_115 with PWN(Sigmoid_117, Mul_118)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_182 with PWN(Sigmoid_184, Mul_185)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_128 with PWN(Sigmoid_130, Mul_131)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_155 with PWN(Sigmoid_157, Mul_158)
[11/17/2021-17:31:10] [V] [TRT] GenericConvActFusionBase: Fusing Conv_196 with PWN(Sigmoid_198, Mul_199)
...
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_562 with PWN(Sigmoid_564, Mul_565)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_575 with PWN(Sigmoid_577, Mul_578)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_602 with PWN(Sigmoid_604, Mul_605)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_615 with PWN(Sigmoid_617, Mul_618)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_629 with PWN(Sigmoid_631, Mul_632)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_668 with PWN(Sigmoid_670, Mul_671)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_642 with PWN(Sigmoid_644, Mul_645)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_655 with PWN(Sigmoid_657, Mul_658)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_682 with PWN(Sigmoid_684, Mul_685)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_695 with PWN(Sigmoid_697, Mul_698)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_709 with PWN(Sigmoid_711, Mul_712)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_748 with PWN(Sigmoid_750, Mul_751)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_722 with PWN(Sigmoid_724, Mul_725)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_735 with PWN(Sigmoid_737, Mul_738)
[11/17/2021-17:31:11] [V] [TRT] GenericConvActFusionBase: Fusing Conv_762 with PWN(Sigmoid_764, Mul_765)
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantizeLinear_15_quantize_scale_node into Conv_9 + PWN(Sigmoid_11, Mul_12)
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_5_quantize_scale_node and DequantizeLinear_8_quantize_scale_node) into Conv_9 + PWN(Sigmoid_11, Mul_12)
[11/17/2021-17:31:11] [V] [TRT] Removing QuantizeLinear_15_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_5_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_8_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantizeLinear_28_quantize_scale_node into Conv_22 + PWN(Sigmoid_24, Mul_25)
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_18_quantize_scale_node and DequantizeLinear_21_quantize_scale_node) into Conv_22 + PWN(Sigmoid_24, Mul_25)
[11/17/2021-17:31:11] [V] [TRT] Removing QuantizeLinear_28_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_18_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_21_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_31_quantize_scale_node and DequantizeLinear_34_quantize_scale_node) into Conv_35 + PWN(Sigmoid_37, Mul_38)
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_31_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_34_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantizeLinear_54_quantize_scale_node into Conv_48 + PWN(Sigmoid_50, Mul_51)
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_44_quantize_scale_node and DequantizeLinear_47_quantize_scale_node) into Conv_48 + PWN(Sigmoid_50, Mul_51)
[11/17/2021-17:31:11] [V] [TRT] Removing QuantizeLinear_54_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_44_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_47_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_57_quantize_scale_node and DequantizeLinear_60_quantize_scale_node) into Conv_61
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_57_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_60_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantizeLinear_95_quantize_scale_node into Conv_89 + PWN(Sigmoid_91, Mul_92)
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_85_quantize_scale_node and DequantizeLinear_88_quantize_scale_node) into Conv_89 + PWN(Sigmoid_91, Mul_92)
[11/17/2021-17:31:11] [V] [TRT] Removing QuantizeLinear_95_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_85_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_88_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantizeLinear_108_quantize_scale_node into Conv_102 + PWN(Sigmoid_104, Mul_105)
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_98_quantize_scale_node and DequantizeLinear_101_quantize_scale_node) into Conv_102 + PWN(Sigmoid_104, Mul_105)
[11/17/2021-17:31:11] [V] [TRT] Removing QuantizeLinear_108_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_98_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_101_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_111_quantize_scale_node and DequantizeLinear_114_quantize_scale_node) into Conv_115 + PWN(Sigmoid_117, Mul_118)
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_111_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_114_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantizeLinear_134_quantize_scale_node into Conv_128 + PWN(Sigmoid_130, Mul_131)
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_124_quantize_scale_node and DequantizeLinear_127_quantize_scale_node) into Conv_128 + PWN(Sigmoid_130, Mul_131)
[11/17/2021-17:31:11] [V] [TRT] Removing QuantizeLinear_134_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_124_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_127_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_137_quantize_scale_node and DequantizeLinear_140_quantize_scale_node) into Conv_141
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_137_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_140_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantizeLinear_161_quantize_scale_node into Conv_155 + PWN(Sigmoid_157, Mul_158)
[11/17/2021-17:31:11] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantizeLinear_151_quantize_scale_node and DequantizeLinear_154_quantize_scale_node) into Conv_155 + PWN(Sigmoid_157, Mul_158)
[11/17/2021-17:31:11] [V] [TRT] Removing QuantizeLinear_161_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_151_quantize_scale_node
[11/17/2021-17:31:11] [V] [TRT] Removing DequantizeLinear_154_quantize_scale_node
...
[11/17/2021-17:32:19] [V] [TRT] Tactic: -2315590536674553176 Time: 0.05212
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_linkable_nn_v1 Tactic: -1610552211426761663
[11/17/2021-17:32:19] [V] [TRT] Tactic: -1610552211426761663 Time: 0.07944
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_linkable_nn_v1 Tactic: -248137846452159671
[11/17/2021-17:32:19] [V] [TRT] Tactic: -248137846452159671 Time: 0.081044
[11/17/2021-17:32:19] [V] [TRT] Fastest Tactic: -2315590536674553176 Time: 0.05212
[11/17/2021-17:32:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -2315590536674553176
[11/17/2021-17:32:19] [V] [TRT] *************** Autotuning format combination: Int8(6400,400:32,20,1) -> Int8(6400,400:32,20,1) ***************
[11/17/2021-17:32:19] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:19] [V] [TRT] --------------- Timing Runner: model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) (CudaGroupConvolution)
[11/17/2021-17:32:19] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:19] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:19] [V] [TRT] --------------- Timing Runner: model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) (CudaDepthwiseConvolution)
[11/17/2021-17:32:19] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:19] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:19] [V] [TRT] --------------- Timing Runner: model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) (FusedConvActConvolution)
[11/17/2021-17:32:19] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:19] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:19] [V] [TRT] --------------- Timing Runner: model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) (CaskConvolution)
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_small_linkable_nt_v1 Tactic: 2512930805881575648
[11/17/2021-17:32:19] [V] [TRT] Tactic: 2512930805881575648 Time: 0.04234
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_256x128_ldg16_relu_interior_linkable_nt_v1 Tactic: 2871659474067289465
[11/17/2021-17:32:19] [V] [TRT] Tactic: 2871659474067289465 Time: 0.056372
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_128x128_ldg16_relu_interior_linkable_nt_v1 Tactic: 4069546629816122384
[11/17/2021-17:32:19] [V] [TRT] Tactic: 4069546629816122384 Time: 0.0402
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_linkable_nt_v1 Tactic: 6816729252632143345
[11/17/2021-17:32:19] [V] [TRT] Tactic: 6816729252632143345 Time: 0.041252
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_linkable_nt_v1 Tactic: 7132003274270369280
[11/17/2021-17:32:19] [V] [TRT] Tactic: 7132003274270369280 Time: 0.05662
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_interior_linkable_nt_v1 Tactic: 9106185976251181685
[11/17/2021-17:32:19] [V] [TRT] Tactic: 9106185976251181685 Time: 0.042984
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_linkable_nt_v1 Tactic: -6726468868693156247
[11/17/2021-17:32:19] [V] [TRT] Tactic: -6726468868693156247 Time: 0.043136
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_linkable_nt_v1 Tactic: -5751220628383722328
[11/17/2021-17:32:19] [V] [TRT] Tactic: -5751220628383722328 Time: 0.039976
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_128x128_ldg16_relu_small_linkable_nt_v1 Tactic: -3932383927815593719
[11/17/2021-17:32:19] [V] [TRT] Tactic: -3932383927815593719 Time: 0.03916
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_interior_linkable_nt_v1 Tactic: -1700814155385235755
[11/17/2021-17:32:19] [V] [TRT] Tactic: -1700814155385235755 Time: 0.041004
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_256x128_ldg16_relu_small_linkable_nt_v1 Tactic: -1611809172220379050
[11/17/2021-17:32:19] [V] [TRT] Tactic: -1611809172220379050 Time: 0.056532
[11/17/2021-17:32:19] [V] [TRT] model.8.cv3.conv.weight + QuantizeLinear_408_quantize_scale_node + Conv_410 + PWN(Sigmoid_412, Mul_413) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_linkable_nt_v1 Tactic: -933148721750444463
[11/17/2021-17:32:19] [V] [TRT] Tactic: -933148721750444463 Time: 0.041016
[11/17/2021-17:32:19] [V] [TRT] Fastest Tactic: -3932383927815593719 Time: 0.03916
[11/17/2021-17:32:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -3932383927815593719
[11/17/2021-17:32:19] [V] [TRT] *************** Autotuning Reformat:Int8(51200,400:4,20,1) -> Int8(6400,400:32,20,1) ***************
[11/17/2021-17:32:19] [V] [TRT] *************** Autotuning Reformat:Int8(6400,400:32,20,1) -> Int8(51200,400:4,20,1) ***************
[11/17/2021-17:32:19] [V] [TRT] *************** Autotuning format combination: Int8(51200,400:4,20,1) -> Int8(25600,400:4,20,1) ***************
[11/17/2021-17:32:19] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:19] [V] [TRT] --------------- Timing Runner: model.9.cv1.conv.weight + QuantizeLinear_421_quantize_scale_node + Conv_423 + PWN(Sigmoid_425, Mul_426) (CudaDepthwiseConvolution)
[11/17/2021-17:32:19] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:19] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:19] [V] [TRT] --------------- Timing Runner: model.9.cv1.conv.weight + QuantizeLinear_421_quantize_scale_node + Conv_423 + PWN(Sigmoid_425, Mul_426) (FusedConvActConvolution)
[11/17/2021-17:32:19] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:19] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:21] [V] [TRT] --------------- Timing Runner: model.9.cv1.conv.weight + QuantizeLinear_421_quantize_scale_node + Conv_423 + PWN(Sigmoid_425, Mul_426) (CaskConvolution)
[11/17/2021-17:32:21] [V] [TRT] model.9.cv1.conv.weight + QuantizeLinear_421_quantize_scale_node + Conv_423 + PWN(Sigmoid_425, Mul_426) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_interior_linkable_nn_v1 Tactic: 1846429674186638572
[11/17/2021-17:32:21] [V] [TRT] Tactic: 1846429674186638572 Time: 0.078872
[11/17/2021-17:32:21] [V] [TRT] model.9.cv1.conv.weight + QuantizeLinear_421_quantize_scale_node + Conv_423 + PWN(Sigmoid_425, Mul_426) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_linkable_nn_v1 Tactic: 2062167331723126804
[11/17/2021-17:32:21] [V] [TRT] Tactic: 2062167331723126804 Time: 0.059524
[11/17/2021-17:32:21] [V] [TRT] model.9.cv1.conv.weight + QuantizeLinear_421_quantize_scale_node + Conv_423 + PWN(Sigmoid_425, Mul_426) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_interior_linkable_nn_v1 Tactic: 2163705974276058675
[11/17/2021-17:32:21] [V] [TRT] Tactic: 2163705974276058675 Time: 0.05188
[11/17/2021-17:32:21] [V] [TRT] model.9.cv1.conv.weight + QuantizeLinear_421_quantize_scale_node + Conv_423 + PWN(Sigmoid_425, Mul_426) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_linkable_nn_v1 Tactic: -933148721750444463
[11/17/2021-17:32:45] [V] [TRT] Tactic: -933148721750444463 Time: 0.04464
[11/17/2021-17:32:45] [V] [TRT] Fastest Tactic: -933148721750444463 Time: 0.04464
[11/17/2021-17:32:45] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -933148721750444463
[11/17/2021-17:32:45] [V] [TRT] *************** Autotuning Reformat:Int8(204800,6400:4,80,1) -> Int8(25600,6400:32,80,1) ***************
[11/17/2021-17:32:45] [V] [TRT] *************** Autotuning Reformat:Int8(25600,6400:32,80,1) -> Int8(204800,6400:4,80,1) ***************
[11/17/2021-17:32:45] [V] [TRT] *************** Autotuning Reformat:Int8(204800,6400:4,80,1) -> Int8(25600,6400:32,80,1) ***************
[11/17/2021-17:32:45] [V] [TRT] *************** Autotuning Reformat:Int8(25600,6400:32,80,1) -> Int8(204800,6400:4,80,1) ***************
[11/17/2021-17:32:45] [V] [TRT] *************** Autotuning format combination: Int8(204800,6400:4,80,1) -> Int8(204800,6400:4,80,1) ***************
[11/17/2021-17:32:45] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:45] [V] [TRT] --------------- Timing Runner: model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) (CudaDepthwiseConvolution)
[11/17/2021-17:32:45] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:45] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:45] [V] [TRT] --------------- Timing Runner: model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) (FusedConvActConvolution)
[11/17/2021-17:32:45] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:45] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:47] [V] [TRT] --------------- Timing Runner: model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) (CaskConvolution)
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_interior_linkable_nn_v1 Tactic: 1846429674186638572
[11/17/2021-17:32:47] [V] [TRT] Tactic: 1846429674186638572 Time: 0.057352
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_linkable_nn_v1 Tactic: 2062167331723126804
[11/17/2021-17:32:47] [V] [TRT] Tactic: 2062167331723126804 Time: 0.0514
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_interior_linkable_nn_v1 Tactic: 2163705974276058675
[11/17/2021-17:32:47] [V] [TRT] Tactic: 2163705974276058675 Time: 0.048452
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_linkable_nn_v1 Tactic: 2953140420734779378
[11/17/2021-17:32:47] [V] [TRT] Tactic: 2953140420734779378 Time: 0.051844
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_interior_linkable_nn_v1 Tactic: 3350188008382892113
[11/17/2021-17:32:47] [V] [TRT] Tactic: 3350188008382892113 Time: 0.050464
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_linkable_nn_v1 Tactic: 5548767105407315374
[11/17/2021-17:32:47] [V] [TRT] Tactic: 5548767105407315374 Time: 0.049372
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_linkable_nn_v1 Tactic: 6754109235568247246
[11/17/2021-17:32:47] [V] [TRT] Tactic: 6754109235568247246 Time: 0.0493
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_interior_linkable_nn_v1 Tactic: -5021832056059729735
[11/17/2021-17:32:47] [V] [TRT] Tactic: -5021832056059729735 Time: 0.049064
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_linkable_nn_v1 Tactic: -4401433188029805615
[11/17/2021-17:32:47] [V] [TRT] Tactic: -4401433188029805615 Time: 0.04918
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_linkable_nn_v1 Tactic: -4101136722697375047
[11/17/2021-17:32:47] [V] [TRT] Tactic: -4101136722697375047 Time: 0.057648
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_linkable_nn_v1 Tactic: -1425547626671279159
[11/17/2021-17:32:47] [V] [TRT] Tactic: -1425547626671279159 Time: 0.05758
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_linkable_nn_v1 Tactic: -297553645873040436
[11/17/2021-17:32:47] [V] [TRT] Tactic: -297553645873040436 Time: 0.04976
[11/17/2021-17:32:47] [V] [TRT] Fastest Tactic: 2163705974276058675 Time: 0.048452
[11/17/2021-17:32:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 2163705974276058675
[11/17/2021-17:32:47] [V] [TRT] *************** Autotuning format combination: Int8(204800,6400:4,80,1) -> Int8(25600,6400:32,80,1) ***************
[11/17/2021-17:32:47] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:47] [V] [TRT] --------------- Timing Runner: model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) (CaskConvolution)
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_linkable_nn_v1 Tactic: 1177313720661951525
[11/17/2021-17:32:47] [V] [TRT] Tactic: 1177313720661951525 Time: 0.051692
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_interior_c32_linkable_nn_v1 Tactic: 2757746194421548129
[11/17/2021-17:32:47] [V] [TRT] Tactic: 2757746194421548129 Time: 0.049872
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_linkable_nn_v1 Tactic: 3314292196591262353
[11/17/2021-17:32:47] [V] [TRT] Tactic: 3314292196591262353 Time: 0.058028
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_linkable_nn_v1 Tactic: 3919188185061627568
[11/17/2021-17:32:47] [V] [TRT] Tactic: 3919188185061627568 Time: 0.048512
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_interior_c32_linkable_nn_v1 Tactic: 5394696047278823029
[11/17/2021-17:32:47] [V] [TRT] Tactic: 5394696047278823029 Time: 0.048652
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_linkable_nn_v1 Tactic: 6725816305394716478
[11/17/2021-17:32:47] [V] [TRT] Tactic: 6725816305394716478 Time: 0.050908
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_interior_c32_linkable_nn_v1 Tactic: 8092104281368710794
[11/17/2021-17:32:47] [V] [TRT] Tactic: 8092104281368710794 Time: 0.057692
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_linkable_nn_v1 Tactic: -6799106134035096080
[11/17/2021-17:32:47] [V] [TRT] Tactic: -6799106134035096080 Time: 0.049164
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_linkable_nn_v1 Tactic: -3560374842932576724
[11/17/2021-17:32:47] [V] [TRT] Tactic: -3560374842932576724 Time: 0.049276
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_interior_c32_linkable_nn_v1 Tactic: -2315590536674553176
[11/17/2021-17:32:47] [V] [TRT] Tactic: -2315590536674553176 Time: 0.047956
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_linkable_nn_v1 Tactic: -1610552211426761663
[11/17/2021-17:32:47] [V] [TRT] Tactic: -1610552211426761663 Time: 0.057596
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_linkable_nn_v1 Tactic: -248137846452159671
[11/17/2021-17:32:47] [V] [TRT] Tactic: -248137846452159671 Time: 0.048848
[11/17/2021-17:32:47] [V] [TRT] Fastest Tactic: -2315590536674553176 Time: 0.047956
[11/17/2021-17:32:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -2315590536674553176
[11/17/2021-17:32:47] [V] [TRT] *************** Autotuning format combination: Int8(25600,6400:32,80,1) -> Int8(25600,6400:32,80,1) ***************
[11/17/2021-17:32:47] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:47] [V] [TRT] --------------- Timing Runner: model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) (CudaGroupConvolution)
[11/17/2021-17:32:47] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:47] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:47] [V] [TRT] --------------- Timing Runner: model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) (CudaDepthwiseConvolution)
[11/17/2021-17:32:47] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:47] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:47] [V] [TRT] --------------- Timing Runner: model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) (FusedConvActConvolution)
[11/17/2021-17:32:47] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:47] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:47] [V] [TRT] --------------- Timing Runner: model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) (CaskConvolution)
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_small_linkable_nt_v1 Tactic: 2512930805881575648
[11/17/2021-17:32:47] [V] [TRT] Tactic: 2512930805881575648 Time: 0.052312
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_256x128_ldg16_relu_interior_linkable_nt_v1 Tactic: 2871659474067289465
[11/17/2021-17:32:47] [V] [TRT] Tactic: 2871659474067289465 Time: 0.03588
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_128x128_ldg16_relu_interior_linkable_nt_v1 Tactic: 4069546629816122384
[11/17/2021-17:32:47] [V] [TRT] Tactic: 4069546629816122384 Time: 0.036036
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_linkable_nt_v1 Tactic: 6816729252632143345
[11/17/2021-17:32:47] [V] [TRT] Tactic: 6816729252632143345 Time: 0.03782
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_linkable_nt_v1 Tactic: 7132003274270369280
[11/17/2021-17:32:47] [V] [TRT] Tactic: 7132003274270369280 Time: 0.0363
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_interior_linkable_nt_v1 Tactic: 9106185976251181685
[11/17/2021-17:32:47] [V] [TRT] Tactic: 9106185976251181685 Time: 0.052248
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_linkable_nt_v1 Tactic: -6726468868693156247
[11/17/2021-17:32:47] [V] [TRT] Tactic: -6726468868693156247 Time: 0.0531
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_linkable_nt_v1 Tactic: -5751220628383722328
[11/17/2021-17:32:47] [V] [TRT] Tactic: -5751220628383722328 Time: 0.036352
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_128x128_ldg16_relu_small_linkable_nt_v1 Tactic: -3932383927815593719
[11/17/2021-17:32:47] [V] [TRT] Tactic: -3932383927815593719 Time: 0.03608
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_interior_linkable_nt_v1 Tactic: -1700814155385235755
[11/17/2021-17:32:47] [V] [TRT] Tactic: -1700814155385235755 Time: 0.03798
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_256x128_ldg16_relu_small_linkable_nt_v1 Tactic: -1611809172220379050
[11/17/2021-17:32:47] [V] [TRT] Tactic: -1611809172220379050 Time: 0.03682
[11/17/2021-17:32:47] [V] [TRT] model.17.cv3.conv.weight + QuantizeLinear_600_quantize_scale_node + Conv_602 + PWN(Sigmoid_604, Mul_605) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_linkable_nt_v1 Tactic: -933148721750444463
[11/17/2021-17:32:47] [V] [TRT] Tactic: -933148721750444463 Time: 0.037756
[11/17/2021-17:32:47] [V] [TRT] Fastest Tactic: 2871659474067289465 Time: 0.03588
[11/17/2021-17:32:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 2871659474067289465
[11/17/2021-17:32:47] [V] [TRT] *************** Autotuning Reformat:Int8(204800,6400:4,80,1) -> Int8(25600,6400:32,80,1) ***************
[11/17/2021-17:32:47] [V] [TRT] *************** Autotuning Reformat:Int8(25600,6400:32,80,1) -> Int8(204800,6400:4,80,1) ***************
[11/17/2021-17:32:47] [V] [TRT] *************** Autotuning Reformat:Int8(204800,6400:4,80,1) -> Int8(25600,6400:32,80,1) ***************
[11/17/2021-17:32:47] [V] [TRT] *************** Autotuning Reformat:Int8(25600,6400:32,80,1) -> Int8(204800,6400:4,80,1) ***************
[11/17/2021-17:32:47] [V] [TRT] *************** Autotuning format combination: Int8(204800,6400:4,80,1) -> Int8(102400,1600:4,40,1) ***************
[11/17/2021-17:32:47] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:47] [V] [TRT] --------------- Timing Runner: model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) (CudaDepthwiseConvolution)
[11/17/2021-17:32:47] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:47] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:47] [V] [TRT] --------------- Timing Runner: model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) (FusedConvActConvolution)
[11/17/2021-17:32:47] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:47] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) (CaskConvolution)
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_linkable_nn_v1 Tactic: 2062167331723126804
[11/17/2021-17:32:50] [V] [TRT] Tactic: 2062167331723126804 Time: 0.167096
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_linkable_nn_v1 Tactic: 2953140420734779378
[11/17/2021-17:32:50] [V] [TRT] Tactic: 2953140420734779378 Time: 0.17544
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_linkable_nn_v1 Tactic: 5548767105407315374
[11/17/2021-17:32:50] [V] [TRT] Tactic: 5548767105407315374 Time: 0.10276
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_linkable_nn_v1 Tactic: 6754109235568247246
[11/17/2021-17:32:50] [V] [TRT] Tactic: 6754109235568247246 Time: 0.105352
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_linkable_nn_v1 Tactic: 7130890136796282277
[11/17/2021-17:32:50] [V] [TRT] Tactic: 7130890136796282277 Time: 0.099456
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_linkable_nn_v1 Tactic: -7038853276937035798
[11/17/2021-17:32:50] [V] [TRT] Tactic: -7038853276937035798 Time: 0.151892
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_linkable_nn_v1 Tactic: -4401433188029805615
[11/17/2021-17:32:50] [V] [TRT] Tactic: -4401433188029805615 Time: 0.09644
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_linkable_nn_v1 Tactic: -4101136722697375047
[11/17/2021-17:32:50] [V] [TRT] Tactic: -4101136722697375047 Time: 0.152192
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_linkable_nn_v1 Tactic: -1425547626671279159
[11/17/2021-17:32:50] [V] [TRT] Tactic: -1425547626671279159 Time: 0.149576
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_linkable_nn_v1 Tactic: -297553645873040436
[11/17/2021-17:32:50] [V] [TRT] Tactic: -297553645873040436 Time: 0.112932
[11/17/2021-17:32:50] [V] [TRT] Fastest Tactic: -4401433188029805615 Time: 0.09644
[11/17/2021-17:32:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -4401433188029805615
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning format combination: Int8(204800,6400:4,80,1) -> Int8(12800,1600:32,40,1) ***************
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) (CaskConvolution)
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_linkable_nn_v1 Tactic: 1177313720661951525
[11/17/2021-17:32:50] [V] [TRT] Tactic: 1177313720661951525 Time: 0.174152
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_linkable_nn_v1 Tactic: 3314292196591262353
[11/17/2021-17:32:50] [V] [TRT] Tactic: 3314292196591262353 Time: 0.149392
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_linkable_nn_v1 Tactic: 3919188185061627568
[11/17/2021-17:32:50] [V] [TRT] Tactic: 3919188185061627568 Time: 0.09638
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_c32_linkable_nn_v1 Tactic: 6168359976651975969
[11/17/2021-17:32:50] [V] [TRT] Tactic: 6168359976651975969 Time: 0.151792
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_linkable_nn_v1 Tactic: 6725816305394716478
[11/17/2021-17:32:50] [V] [TRT] Tactic: 6725816305394716478 Time: 0.166332
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_linkable_nn_v1 Tactic: -6799106134035096080
[11/17/2021-17:32:50] [V] [TRT] Tactic: -6799106134035096080 Time: 0.102728
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_c32_linkable_nn_v1 Tactic: -5339554066242055154
[11/17/2021-17:32:50] [V] [TRT] Tactic: -5339554066242055154 Time: 0.099468
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_linkable_nn_v1 Tactic: -3560374842932576724
[11/17/2021-17:32:50] [V] [TRT] Tactic: -3560374842932576724 Time: 0.111916
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_linkable_nn_v1 Tactic: -1610552211426761663
[11/17/2021-17:32:50] [V] [TRT] Tactic: -1610552211426761663 Time: 0.15232
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_linkable_nn_v1 Tactic: -248137846452159671
[11/17/2021-17:32:50] [V] [TRT] Tactic: -248137846452159671 Time: 0.105552
[11/17/2021-17:32:50] [V] [TRT] Fastest Tactic: 3919188185061627568 Time: 0.09638
[11/17/2021-17:32:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 3919188185061627568
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning format combination: Int8(25600,6400:32,80,1) -> Int8(12800,1600:32,40,1) ***************
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) (CudaGroupConvolution)
[11/17/2021-17:32:50] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) (CudaDepthwiseConvolution)
[11/17/2021-17:32:50] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) (FusedConvActConvolution)
[11/17/2021-17:32:50] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) (CaskConvolution)
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_large_linkable_nt_v1 Tactic: 404322662569106468
[11/17/2021-17:32:50] [V] [TRT] Tactic: 404322662569106468 Time: 0.069432
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_small_linkable_nt_v1 Tactic: 2512930805881575648
[11/17/2021-17:32:50] [V] [TRT] Tactic: 2512930805881575648 Time: 0.065796
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_linkable_nt_v1 Tactic: 6816729252632143345
[11/17/2021-17:32:50] [V] [TRT] Tactic: 6816729252632143345 Time: 0.064724
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_linkable_nt_v1 Tactic: 7132003274270369280
[11/17/2021-17:32:50] [V] [TRT] Tactic: 7132003274270369280 Time: 0.094204
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_linkable_nt_v1 Tactic: -6726468868693156247
[11/17/2021-17:32:50] [V] [TRT] Tactic: -6726468868693156247 Time: 0.067796
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_linkable_nt_v1 Tactic: -5751220628383722328
[11/17/2021-17:32:50] [V] [TRT] Tactic: -5751220628383722328 Time: 0.061548
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_128x128_ldg16_relu_small_linkable_nt_v1 Tactic: -3932383927815593719
[11/17/2021-17:32:50] [V] [TRT] Tactic: -3932383927815593719 Time: 0.059616
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_256x128_ldg16_relu_large_linkable_nt_v1 Tactic: -3539129975763254126
[11/17/2021-17:32:50] [V] [TRT] Tactic: -3539129975763254126 Time: 0.094272
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_linkable_nt_v1 Tactic: -3148295143731023211
[11/17/2021-17:32:50] [V] [TRT] Tactic: -3148295143731023211 Time: 0.066052
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_256x128_ldg16_relu_small_linkable_nt_v1 Tactic: -1611809172220379050
[11/17/2021-17:32:50] [V] [TRT] Tactic: -1611809172220379050 Time: 0.09414
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_128x128_ldg16_relu_large_linkable_nt_v1 Tactic: -1283924227207993907
[11/17/2021-17:32:50] [V] [TRT] Tactic: -1283924227207993907 Time: 0.064624
[11/17/2021-17:32:50] [V] [TRT] model.18.conv.weight + QuantizeLinear_613_quantize_scale_node + Conv_615 + PWN(Sigmoid_617, Mul_618) Set Tactic Name: turing_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_linkable_nt_v1 Tactic: -933148721750444463
[11/17/2021-17:32:50] [V] [TRT] Tactic: -933148721750444463 Time: 0.06284
[11/17/2021-17:32:50] [V] [TRT] Fastest Tactic: -3932383927815593719 Time: 0.059616
[11/17/2021-17:32:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -3932383927815593719
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(102400,1600:4,40,1) -> Int8(409600,1600,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(102400,1600:4,40,1) -> Int8(12800,1600:32,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(12800,1600:32,40,1) -> Int8(409600,1600,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(12800,1600:32,40,1) -> Int8(102400,1600:4,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(204800,6400:4,80,1) -> Int8(25600,6400:32,80,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(25600,6400:32,80,1) -> Int8(204800,6400:4,80,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning format combination: Int8(204800,6400:4,80,1) -> Float(1632000,6400,80,1) ***************
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 (CudaDepthwiseConvolution)
[11/17/2021-17:32:50] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 (CaskConvolution)
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 892787096507693407
[11/17/2021-17:32:50] [V] [TRT] Tactic: 892787096507693407 Time: 0.090652
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 1204440019753223942
[11/17/2021-17:32:50] [V] [TRT] Tactic: 1204440019753223942 Time: 0.093808
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 1659301557717208403
[11/17/2021-17:32:50] [V] [TRT] Tactic: 1659301557717208403 Time: 0.094004
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 2057291331119027912
[11/17/2021-17:32:50] [V] [TRT] Tactic: 2057291331119027912 Time: 0.078568
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: 3275977259705528576
[11/17/2021-17:32:50] [V] [TRT] Tactic: 3275977259705528576 Time: 0.09092
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 5623454780463195174
[11/17/2021-17:32:50] [V] [TRT] Tactic: 5623454780463195174 Time: 0.095852
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -9204333525109552344
[11/17/2021-17:32:50] [V] [TRT] Tactic: -9204333525109552344 Time: 0.077616
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x64_relu_interior_nn_v1 Tactic: -7924103240988931433
[11/17/2021-17:32:50] [V] [TRT] Tactic: -7924103240988931433 Time: 0.077188
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x32_relu_xregs_interior_nn_v1 Tactic: -7489650117016530013
[11/17/2021-17:32:50] [V] [TRT] Tactic: -7489650117016530013 Time: 0.090972
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -4973811344878172338
[11/17/2021-17:32:50] [V] [TRT] Tactic: -4973811344878172338 Time: 0.090132
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x32_relu_interior_nn_v1 Tactic: -3908975881807046106
[11/17/2021-17:32:50] [V] [TRT] Tactic: -3908975881807046106 Time: 0.093164
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_icudnn_int8x4_128x128_relu_interior_nn_v1 Tactic: -1765942417666394360
[11/17/2021-17:32:50] [V] [TRT] Tactic: -1765942417666394360 Time: 0.088856
[11/17/2021-17:32:50] [V] [TRT] Fastest Tactic: -7924103240988931433 Time: 0.077188
[11/17/2021-17:32:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7924103240988931433
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning format combination: Int8(25600,6400:32,80,1) -> Float(51200,6400:32,80,1) ***************
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 (CaskConvolution)
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 394365917225754726
[11/17/2021-17:32:50] [V] [TRT] Tactic: 394365917225754726 Time: 0.078476
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: 924563784895318224
[11/17/2021-17:32:50] [V] [TRT] Tactic: 924563784895318224 Time: 0.081416
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_256x64_ldg16_relu_interior_nt_v1 Tactic: 4531028070024747144
[11/17/2021-17:32:50] [V] [TRT] Tactic: 4531028070024747144 Time: 0.084376
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_128x128_ldg16_relu_interior_nt_v1 Tactic: 4697540470896098800
[11/17/2021-17:32:50] [V] [TRT] Tactic: 4697540470896098800 Time: 0.07876
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: 6096719469361499298
[11/17/2021-17:32:50] [V] [TRT] Tactic: 6096719469361499298 Time: 0.081376
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_256x128_ldg16_relu_interior_nt_v1 Tactic: 7469355608320515097
[11/17/2021-17:32:50] [V] [TRT] Tactic: 7469355608320515097 Time: 0.113696
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 7785217228143857868
[11/17/2021-17:32:50] [V] [TRT] Tactic: 7785217228143857868 Time: 0.113316
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 8315790488934712458
[11/17/2021-17:32:50] [V] [TRT] Tactic: 8315790488934712458 Time: 0.07778
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_interior_nt_v1 Tactic: 9221575372280690678
[11/17/2021-17:32:50] [V] [TRT] Tactic: 9221575372280690678 Time: 0.082128
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: -8462194455331556195
[11/17/2021-17:32:50] [V] [TRT] Tactic: -8462194455331556195 Time: 0.077908
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: -7638944668269666085
[11/17/2021-17:32:50] [V] [TRT] Tactic: -7638944668269666085 Time: 0.113384
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -7185527339793611699
[11/17/2021-17:32:50] [V] [TRT] Tactic: -7185527339793611699 Time: 0.081472
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -5979101256828290173
[11/17/2021-17:32:50] [V] [TRT] Tactic: -5979101256828290173 Time: 0.083704
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -5250790590226149674
[11/17/2021-17:32:50] [V] [TRT] Tactic: -5250790590226149674 Time: 0.113952
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -4831366370915083630
[11/17/2021-17:32:50] [V] [TRT] Tactic: -4831366370915083630 Time: 0.075792
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -4563432698383308679
[11/17/2021-17:32:50] [V] [TRT] Tactic: -4563432698383308679 Time: 0.114848
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: -3936136542475827126
[11/17/2021-17:32:50] [V] [TRT] Tactic: -3936136542475827126 Time: 0.080264
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -3784829056659735491
[11/17/2021-17:32:50] [V] [TRT] Tactic: -3784829056659735491 Time: 0.075552
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -2697681528489059028
[11/17/2021-17:32:50] [V] [TRT] Tactic: -2697681528489059028 Time: 0.076568
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_128x128_ldg16_relu_interior_nt_v1 Tactic: -2621193268472024213
[11/17/2021-17:32:50] [V] [TRT] Tactic: -2621193268472024213 Time: 0.078472
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: turing_fp32_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_interior_nt_v1 Tactic: -2120707675073643088
[11/17/2021-17:32:50] [V] [TRT] Tactic: -2120707675073643088 Time: 0.082184
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -733152064595858464
[11/17/2021-17:32:50] [V] [TRT] Tactic: -733152064595858464 Time: 0.083192
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_interior_nt_v1 Tactic: -706197824303187656
[11/17/2021-17:32:50] [V] [TRT] Tactic: -706197824303187656 Time: 0.08146
[11/17/2021-17:32:50] [V] [TRT] model.24.m.0.weight + QuantizeLinear_773_quantize_scale_node + Conv_775 Set Tactic Name: volta_fp32_i8816cudnn_int8_256x128_ldg16_relu_interior_nt_v1 Tactic: -214244313010793854
[11/17/2021-17:32:50] [V] [TRT] Tactic: -214244313010793854 Time: 0.113212
[11/17/2021-17:32:50] [V] [TRT] Fastest Tactic: -3784829056659735491 Time: 0.075552
[11/17/2021-17:32:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -3784829056659735491
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Float(6400,1600:32,40,1) -> Float(204800,1600,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning format combination: Float(204800,1600,40,1) -> Int8(409600,1600,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: QuantizeLinear_622_quantize_scale_node_clone_1 (Scale)
[11/17/2021-17:32:50] [V] [TRT] Setting a default quantization params because quantization data is missing for QuantizeLinear_622_quantize_scale_node_clone_1
[11/17/2021-17:32:50] [V] [TRT] Tactic: 0 Time: 0.01194
[11/17/2021-17:32:50] [V] [TRT] Fastest Tactic: 0 Time: 0.01194
[11/17/2021-17:32:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning format combination: Float(204800,1600,40,1) -> Int8(102400,1600:4,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: QuantizeLinear_622_quantize_scale_node_clone_1 (Scale)
[11/17/2021-17:32:50] [V] [TRT] Setting a default quantization params because quantization data is missing for QuantizeLinear_622_quantize_scale_node_clone_1
[11/17/2021-17:32:50] [V] [TRT] Tactic: 0 Time: 0.014488
[11/17/2021-17:32:50] [V] [TRT] Fastest Tactic: 0 Time: 0.014488
[11/17/2021-17:32:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning format combination: Float(204800,1600,40,1) -> Int8(12800,1600:32,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: QuantizeLinear_622_quantize_scale_node_clone_1 (Scale)
[11/17/2021-17:32:50] [V] [TRT] Setting a default quantization params because quantization data is missing for QuantizeLinear_622_quantize_scale_node_clone_1
[11/17/2021-17:32:50] [V] [TRT] Tactic: 0 Time: 0.014436
[11/17/2021-17:32:50] [V] [TRT] Fastest Tactic: 0 Time: 0.014436
[11/17/2021-17:32:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(409600,1600,40,1) -> Int8(102400,1600:4,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(409600,1600,40,1) -> Int8(12800,1600:32,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(102400,1600:4,40,1) -> Int8(409600,1600,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(102400,1600:4,40,1) -> Int8(12800,1600:32,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(12800,1600:32,40,1) -> Int8(409600,1600,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(12800,1600:32,40,1) -> Int8(102400,1600:4,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(409600,1600,40,1) -> Int8(102400,1600:4,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(409600,1600,40,1) -> Int8(12800,1600:32,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(102400,1600:4,40,1) -> Int8(12800,1600:32,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning Reformat:Int8(12800,1600:32,40,1) -> Int8(102400,1600:4,40,1) ***************
[11/17/2021-17:32:50] [V] [TRT] *************** Autotuning format combination: Int8(102400,1600:4,40,1) -> Int8(51200,1600:4,40,1) ***************
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) (CudaDepthwiseConvolution)
[11/17/2021-17:32:50] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:50] [V] [TRT] --------------- Timing Runner: model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) (FusedConvActConvolution)
[11/17/2021-17:32:50] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[11/17/2021-17:32:50] [W] [TRT] Some weights are outside of int8_t range and will be clipped to int8_t range.
[11/17/2021-17:32:53] [V] [TRT] --------------- Timing Runner: model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) (CaskConvolution)
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_interior_linkable_nn_v1 Tactic: 1846429674186638572
[11/17/2021-17:32:53] [V] [TRT] Tactic: 1846429674186638572 Time: 0.05034
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_linkable_nn_v1 Tactic: 2062167331723126804
[11/17/2021-17:32:53] [V] [TRT] Tactic: 2062167331723126804 Time: 0.0534
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_interior_linkable_nn_v1 Tactic: 2163705974276058675
[11/17/2021-17:32:53] [V] [TRT] Tactic: 2163705974276058675 Time: 0.036024
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_linkable_nn_v1 Tactic: 2953140420734779378
[11/17/2021-17:32:53] [V] [TRT] Tactic: 2953140420734779378 Time: 0.054904
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_interior_linkable_nn_v1 Tactic: 3350188008382892113
[11/17/2021-17:32:53] [V] [TRT] Tactic: 3350188008382892113 Time: 0.051344
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_linkable_nn_v1 Tactic: 5548767105407315374
[11/17/2021-17:32:53] [V] [TRT] Tactic: 5548767105407315374 Time: 0.038072
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_linkable_nn_v1 Tactic: 6754109235568247246
[11/17/2021-17:32:53] [V] [TRT] Tactic: 6754109235568247246 Time: 0.038956
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_interior_linkable_nn_v1 Tactic: -5021832056059729735
[11/17/2021-17:32:53] [V] [TRT] Tactic: -5021832056059729735 Time: 0.038024
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_linkable_nn_v1 Tactic: -4401433188029805615
[11/17/2021-17:32:53] [V] [TRT] Tactic: -4401433188029805615 Time: 0.03712
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_linkable_nn_v1 Tactic: -4101136722697375047
[11/17/2021-17:32:53] [V] [TRT] Tactic: -4101136722697375047 Time: 0.051336
[11/17/2021-17:32:53] [V] [TRT] model.20.cv1.conv.weight + QuantizeLinear_627_quantize_scale_node + Conv_629 + PWN(Sigmoid_631, Mul_632) Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_linkable_nn_v1 Tactic: -1425547626671279159
.....
[11/17/2021-17:33:05] [I]
[11/17/2021-17:33:05] [I] === Performance summary ===
[11/17/2021-17:33:05] [I] Throughput: 547.991 qps
[11/17/2021-17:33:05] [I] Latency: min = 2.12048 ms, max = 4.74612 ms, mean = 2.25926 ms, median = 2.17554 ms, percentile(99%) = 3.94409 ms
[11/17/2021-17:33:05] [I] End-to-End Host Latency: min = 3.13608 ms, max = 7.87646 ms, mean = 3.4508 ms, median = 3.29333 ms, percentile(99%) = 5.36621 ms
[11/17/2021-17:33:05] [I] Enqueue Time: min = 0.521301 ms, max = 1.73022 ms, mean = 0.781702 ms, median = 0.755066 ms, percentile(99%) = 1.16772 ms
[11/17/2021-17:33:05] [I] H2D Latency: min = 0.419434 ms, max = 0.467896 ms, mean = 0.43049 ms, median = 0.428223 ms, percentile(99%) = 0.447754 ms
[11/17/2021-17:33:05] [I] GPU Compute Time: min = 1.68793 ms, max = 4.297 ms, mean = 1.82153 ms, median = 1.73657 ms, percentile(99%) = 3.49805 ms
[11/17/2021-17:33:05] [I] D2H Latency: min = 0.00439453 ms, max = 0.0280762 ms, mean = 0.00724155 ms, median = 0.00695801 ms, percentile(99%) = 0.0107422 ms
[11/17/2021-17:33:05] [I] Total Host Walltime: 3.00735 s
[11/17/2021-17:33:05] [I] Total GPU Compute Time: 3.00189 s
[11/17/2021-17:33:05] [I] Explanations of the performance metrics are printed in the verbose logs.
[11/17/2021-17:33:05] [V]
[11/17/2021-17:33:05] [V] === Explanations of the performance metrics ===
[11/17/2021-17:33:05] [V] Total Host Walltime: the host walltime from when the first query (after warmups) is enqueued to when the last query is completed.
[11/17/2021-17:33:05] [V] GPU Compute Time: the GPU latency to execute the kernels for a query.
[11/17/2021-17:33:05] [V] Total GPU Compute Time: the summation of the GPU Compute Time of all the queries. If this is significantly shorter than Total Host Walltime, the GPU may be under-utilized because of host-side overheads or data transfers.
[11/17/2021-17:33:05] [V] Throughput: the observed throughput computed by dividing the number of queries by the Total Host Walltime. If this is significantly lower than the reciprocal of GPU Compute Time, the GPU may be under-utilized because of host-side overheads or data transfers.
[11/17/2021-17:33:05] [V] Enqueue Time: the host latency to enqueue a query. If this is longer than GPU Compute Time, the GPU may be under-utilized.
[11/17/2021-17:33:05] [V] H2D Latency: the latency for host-to-device data transfers for input tensors of a single query.
[11/17/2021-17:33:05] [V] D2H Latency: the latency for device-to-host data transfers for output tensors of a single query.
[11/17/2021-17:33:05] [V] Latency: the summation of H2D Latency, GPU Compute Time, and D2H Latency. This is the latency to infer a single query.
[11/17/2021-17:33:05] [V] End-to-End Host Latency: the duration from when the H2D of a query is called to when the D2H of the same query is completed, which includes the latency to wait for the completion of the previous query. This is the latency of a query if multiple queries are enqueued consecutively.
[11/17/2021-17:33:05] [I]
&&&& PASSED TensorRT.trtexec [TensorRT v8003] # ./trtexec --onnx=/root/yolov5s.onnx --workspace=10240 --int8 --saveEngine=/root/yolov5s-6.0-qat-int8-coco.engine --plugins=/root/workspace/plugins/YoloLayer_TRT_v6.0/build/libyolo.so --verbose
[11/17/2021-17:33:05] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1013, GPU 1638 (MiB)
root@d741691190a8:/workspace/tensorrt/bin#
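
As a side note, the summary metrics above are internally consistent with the explanations trtexec prints. A quick sanity-check sketch (plain Python; the numbers are hand-copied from the summary above, so treat them as approximate, and the file name is just a placeholder):

# metrics_check.py -- cross-checks how trtexec derives its summary metrics
throughput = 547.991                # qps, as reported
total_walltime = 3.00735            # s, Total Host Walltime
num_queries = round(throughput * total_walltime)  # ~1648 timed queries

mean_gpu_ms = 1.82153               # mean GPU Compute Time, ms
total_gpu_s = num_queries * mean_gpu_ms / 1000.0  # ~3.002 s, matches Total GPU Compute Time

# Latency = H2D Latency + GPU Compute Time + D2H Latency (per the explanation above)
mean_latency_ms = 0.43049 + 1.82153 + 0.00724155  # ~2.259 ms, matches the reported mean Latency
print(num_queries, total_gpu_s, mean_latency_ms)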

Hi,

Could you please share the plugin, or a minimal model that reproduces the issue, so that we can try it from our end?

Thank you.
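
If it helps, a minimal QAT repro can often be built from a single quantized conv layer. A rough sketch (untested; it follows the pytorch-quantization toolkit's documented calibrate-then-export flow, and the layer sizes, input shape, and file name are placeholders):

import torch
import torch.nn as nn
from pytorch_quantization import quant_modules, quant_nn

quant_modules.initialize()  # monkey-patches nn.Conv2d -> QuantConv2d, etc.
# one conv + SiLU, mirroring the Conv + PWN(Sigmoid, Mul) pattern in the log above
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.SiLU()).eval()

# run one calibration pass so every TensorQuantizer has an amax before export
for m in model.modules():
    if isinstance(m, quant_nn.TensorQuantizer):
        m.disable_quant(); m.enable_calib()
model(torch.randn(1, 3, 640, 640))
for m in model.modules():
    if isinstance(m, quant_nn.TensorQuantizer):
        m.load_calib_amax(); m.enable_quant(); m.disable_calib()

quant_nn.TensorQuantizer.use_fb_fake_quant = True  # emit QuantizeLinear/DequantizeLinear nodes
torch.onnx.export(model, torch.randn(1, 3, 640, 640), "repro-qdq.onnx", opset_version=13)

The resulting repro-qdq.onnx could then be built with the same trtexec flags (minus the YOLO plugin) to see whether the engine-size difference reproduces on a minimal model.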