&&&& RUNNING TensorRT.trtexec [TensorRT v8001] # /usr/src/tensorrt/bin/trtexec --onnx=resnet.onnx --useCudaGraph --threads --noDataTransfers --verbose --int8
[10/04/2021-21:26:18] [I] === Model Options ===
[10/04/2021-21:26:18] [I] Format: ONNX
[10/04/2021-21:26:18] [I] Model: resnet.onnx
[10/04/2021-21:26:18] [I] Output:
[10/04/2021-21:26:18] [I] === Build Options ===
[10/04/2021-21:26:18] [I] Max batch: explicit
[10/04/2021-21:26:18] [I] Workspace: 16 MiB
[10/04/2021-21:26:18] [I] minTiming: 1
[10/04/2021-21:26:18] [I] avgTiming: 8
[10/04/2021-21:26:18] [I] Precision: FP32+INT8
[10/04/2021-21:26:18] [I] Calibration: Dynamic
[10/04/2021-21:26:18] [I] Refit: Disabled
[10/04/2021-21:26:18] [I] Sparsity: Disabled
[10/04/2021-21:26:18] [I] Safe mode: Disabled
[10/04/2021-21:26:18] [I] Restricted mode: Disabled
[10/04/2021-21:26:18] [I] Save engine:
[10/04/2021-21:26:18] [I] Load engine:
[10/04/2021-21:26:18] [I] NVTX verbosity: 0
[10/04/2021-21:26:18] [I] Tactic sources: Using default tactic sources
[10/04/2021-21:26:18] [I] timingCacheMode: local
[10/04/2021-21:26:18] [I] timingCacheFile:
[10/04/2021-21:26:18] [I] Input(s)s format: fp32:CHW
[10/04/2021-21:26:18] [I] Output(s)s format: fp32:CHW
[10/04/2021-21:26:18] [I] Input build shapes: model
[10/04/2021-21:26:18] [I] Input calibration shapes: model
[10/04/2021-21:26:18] [I] === System Options ===
[10/04/2021-21:26:18] [I] Device: 0
[10/04/2021-21:26:18] [I] DLACore:
[10/04/2021-21:26:18] [I] Plugins:
[10/04/2021-21:26:18] [I] === Inference Options ===
[10/04/2021-21:26:18] [I] Batch: Explicit
[10/04/2021-21:26:18] [I] Input inference shapes: model
[10/04/2021-21:26:18] [I] Iterations: 10
[10/04/2021-21:26:18] [I] Duration: 3s (+ 200ms warm up)
[10/04/2021-21:26:18] [I] Sleep time: 0ms
[10/04/2021-21:26:18] [I] Streams: 1
[10/04/2021-21:26:18] [I] ExposeDMA: Disabled
[10/04/2021-21:26:18] [I] Data transfers: Disabled
[10/04/2021-21:26:18] [I] Spin-wait: Disabled
[10/04/2021-21:26:18] [I] Multithreading: Enabled
[10/04/2021-21:26:18] [I] CUDA Graph: Enabled
[10/04/2021-21:26:18] [I] Separate profiling: Disabled
[10/04/2021-21:26:18] [I] Time Deserialize: Disabled
[10/04/2021-21:26:18] [I] Time Refit: Disabled
[10/04/2021-21:26:18] [I] Skip inference: Disabled
[10/04/2021-21:26:18] [I] Inputs:
[10/04/2021-21:26:18] [I] === Reporting Options ===
[10/04/2021-21:26:18] [I] Verbose: Enabled
[10/04/2021-21:26:18] [I] Averages: 10 inferences
[10/04/2021-21:26:18] [I] Percentile: 99
[10/04/2021-21:26:18] [I] Dump refittable layers: Disabled
[10/04/2021-21:26:18] [I] Dump output: Disabled
[10/04/2021-21:26:18] [I] Profile: Disabled
[10/04/2021-21:26:18] [I] Export timing to JSON file:
[10/04/2021-21:26:18] [I] Export output to JSON file:
[10/04/2021-21:26:18] [I] Export profile to JSON file:
[10/04/2021-21:26:18] [I]
[10/04/2021-21:26:18] [I] === Device Information ===
[10/04/2021-21:26:18] [I] Selected Device: Xavier
[10/04/2021-21:26:18] [I] Compute Capability: 7.2
[10/04/2021-21:26:18] [I] SMs: 8
[10/04/2021-21:26:18] [I] Compute Clock Rate: 1.377 GHz
[10/04/2021-21:26:18] [I] Device Global Memory: 31928 MiB
[10/04/2021-21:26:18] [I] Shared Memory per SM: 96 KiB
[10/04/2021-21:26:18] [I] Memory Bus Width: 256 bits (ECC disabled)
[10/04/2021-21:26:18] [I] Memory Clock Rate: 1.377 GHz
[10/04/2021-21:26:18] [I]
[10/04/2021-21:26:18] [I] TensorRT version: 8001
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::Clip_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::ScatterND version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::Proposal version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::Split version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[10/04/2021-21:26:18] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[10/04/2021-21:26:19] [I] [TRT] [MemUsageChange] Init CUDA: CPU +354, GPU +0, now: CPU 372, GPU 2607 (MiB)
[10/04/2021-21:26:19] [I] Start parsing network model
[10/04/2021-21:26:20] [I] [TRT] ----------------------------------------------------------------
[10/04/2021-21:26:20] [I] [TRT] Input filename: resnet.onnx
[10/04/2021-21:26:20] [I] [TRT] ONNX IR version: 0.0.7
[10/04/2021-21:26:20] [I] [TRT] Opset version: 13
[10/04/2021-21:26:20] [I] [TRT] Producer name: tf2onnx
[10/04/2021-21:26:20] [I] [TRT] Producer version: 1.10.0
[10/04/2021-21:26:20] [I] [TRT] Domain:
[10/04/2021-21:26:20] [I] [TRT] Model version: 0
[10/04/2021-21:26:20] [I] [TRT] Doc string:
[10/04/2021-21:26:20] [I] [TRT] ----------------------------------------------------------------
[10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::GridAnchor_TRT version 1
[10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::GridAnchorRect_TRT version 1
[10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::NMS_TRT version 1
[10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::Reorg_TRT version 1
[10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::Region_TRT version 1
[10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::Clip_TRT version 1
[10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::LReLU_TRT version 1
[10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::PriorBox_TRT version 1
[10/04/2021-21:26:20] [V] [TRT] Plugin creator already
registered - ::Normalize_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::ScatterND version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::RPROI_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::BatchedNMS_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::BatchedNMSDynamic_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::FlattenConcat_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::CropAndResize version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::DetectionLayer_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::EfficientNMS_ONNX_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::EfficientNMS_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::Proposal version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::ProposalLayer_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::PyramidROIAlign_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::ResizeNearest_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::Split version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::SpecialSlice_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Plugin creator already registered - ::InstanceNormalization_TRT version 1 [10/04/2021-21:26:20] [V] [TRT] Adding network input: input_1 with dtype: float32, dimensions: (-1, 3, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Registering tensor: input_1 for ONNX tensor: input_1 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__998 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__994 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__990 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__986 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__978 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__974 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__970 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__966 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__958 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__954 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__950 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__942 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__938 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__934 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__930 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__926 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__918 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__914 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__910 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__902 [10/04/2021-21:26:20] [V] [TRT] Importing 
initializer: quant_scale__898 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__894 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__890 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__886 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__878 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__874 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__870 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__866 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__858 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__854 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__850 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__846 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__838 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__834 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__830 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__826 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__822 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__818 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__810 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__806 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__802 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__794 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__790 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__786 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__782 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__774 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__770 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__766 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__762 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__758 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__750 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__746 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__742 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__738 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__730 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__726 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__722 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__714 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__710 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__706 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__702 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__694 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__690 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__686 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__682 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__674 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__670 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__666 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__662 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: 
quant_scale__654 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__650 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__646 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__642 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__634 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__630 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__626 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1186 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1182 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1178 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1174 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1166 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1162 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1158 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1154 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1146 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1142 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1138 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1130 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1126 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1122 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1118 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1114 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1106 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1102 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1098 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1094 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1086 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1082 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1078 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1074 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1066 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1062 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1058 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1054 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1046 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1042 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1038 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1034 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1026 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1022 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1018 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1010 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1006 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: quant_scale__1002 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: const_axes__2019 [10/04/2021-21:26:20] [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd/ReadVariableOp__111 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd/ReadVariableOp__99 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd/ReadVariableOp__89 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd/ReadVariableOp__77 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd/ReadVariableOp__617 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd/ReadVariableOp__607 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd/ReadVariableOp__595 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd/ReadVariableOp__585 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd/ReadVariableOp__573 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd/ReadVariableOp__563 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd/ReadVariableOp__551 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd/ReadVariableOp__67 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd/ReadVariableOp__541 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd/ReadVariableOp__529 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd/ReadVariableOp__519 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd/ReadVariableOp__507 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd/ReadVariableOp__497 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd/ReadVariableOp__485 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd/ReadVariableOp__475 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd/ReadVariableOp__463 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd/ReadVariableOp__453 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd/ReadVariableOp__423 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd/ReadVariableOp__55 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd/ReadVariableOp__441 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd/ReadVariableOp__431 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd/ReadVariableOp__413 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd/ReadVariableOp__403 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd/ReadVariableOp__391 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd/ReadVariableOp__381 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd/ReadVariableOp__369 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd/ReadVariableOp__359 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd/ReadVariableOp__347 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd/ReadVariableOp__337 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd/ReadVariableOp__45 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd/ReadVariableOp__325 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd/ReadVariableOp__315 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd/ReadVariableOp__303 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd/ReadVariableOp__293 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd/ReadVariableOp__281 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd/ReadVariableOp__271 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd/ReadVariableOp__259 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd/ReadVariableOp__249 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd/ReadVariableOp__219 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd/ReadVariableOp__237 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd/ReadVariableOp__33 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd/ReadVariableOp__227 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd/ReadVariableOp__209 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd/ReadVariableOp__199 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd/ReadVariableOp__187 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd/ReadVariableOp__177 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd/ReadVariableOp__165 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd/ReadVariableOp__155 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd/ReadVariableOp__143 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd/ReadVariableOp__133 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd/ReadVariableOp__121 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd/ReadVariableOp__23 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d/transpose__8 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d/BiasAdd/ReadVariableOp__11 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/dense/MatMul/ReadVariableOp__622 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp__112 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp_1__113 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp__114 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp_1__115 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp__100 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp_1__101 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp__102 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp_1__103 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp__90 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp_1__91 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp__92 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp_1__93 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp__78 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp_1__79 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp__80 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp_1__81 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp__618 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp_1__619 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp__620 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp_1__621 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp__608 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp_1__609 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp__610 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp_1__611 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp__596 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp_1__597 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp__598 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp_1__599 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp__586 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp_1__587 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp__588 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp_1__589 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp__574 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp_1__575 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp__576 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp_1__577 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp__68 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp_1__69 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp__70 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp_1__71 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp__564 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp_1__565 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp__566 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp_1__567 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp__552 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp_1__553 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp__554 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp_1__555 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp__542 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp_1__543 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp__544 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp_1__545 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp__530 [10/04/2021-21:26:20] [V] [TRT] Importing 
initializer: StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp_1__531 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp__532 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp_1__533 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp__520 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp_1__521 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp__522 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp_1__523 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp__508 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp_1__509 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp__510 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp_1__511 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp__498 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp_1__499 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp__500 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp_1__501 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp__486 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp_1__487 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp__488 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp_1__489 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp__476 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp_1__477 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp__478 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp_1__479 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp__464 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp_1__465 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp__466 [10/04/2021-21:26:20] 
[V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp_1__467 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp__56 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp_1__57 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp__58 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp_1__59 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp__454 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp_1__455 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp__456 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp_1__457 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp__442 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp_1__443 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp__444 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp_1__445 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp__432 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp_1__433 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp__434 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp_1__435 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp__414 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp_1__415 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp__416 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp_1__417 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp__404 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp_1__405 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp__406 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp_1__407 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp__392 
[10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp_1__393 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp__394 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp_1__395 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp__382 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp_1__383 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp__384 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp_1__385 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp__370 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp_1__371 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp__372 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp_1__373 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp__360 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp_1__361 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp__362 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp_1__363 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp__348 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp_1__349 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp__350 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp_1__351 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp__46 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp_1__47 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp__48 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp_1__49 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp__338 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp_1__339 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp__340 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp_1__341 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp__326 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp_1__327 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp__328 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp_1__329 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp__316 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp_1__317 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp__318 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp_1__319 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp__304 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp_1__305 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp__306 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp_1__307 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp__294 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp_1__295 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp__296 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp_1__297 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp__282 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp_1__283 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp__284 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp_1__285 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp__272 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp_1__273 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp__274 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp_1__275 
[10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp__260 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp_1__261 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp__262 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp_1__263 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp__250 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp_1__251 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp__252 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp_1__253 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp__238 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp_1__239 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp__240 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp_1__241 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp__34 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp_1__35 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp__36 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp_1__37 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp__228 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp_1__229 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp__230 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp_1__231 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp__210 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp_1__211 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp__212 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp_1__213 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp__200 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp_1__201 
[10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp__202 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp_1__203 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp__188 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp_1__189 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp__190 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp_1__191 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp__178 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp_1__179 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp__180 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp_1__181 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp__166 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp_1__167 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp__168 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp_1__169 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp__156 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp_1__157 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp__158 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp_1__159 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp__144 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp_1__145 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp__146 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp_1__147 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp__134 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp_1__135 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp__136 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp_1__137 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp__122 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp_1__123 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp__124 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp_1__125 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp__24 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp_1__25 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp__26 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp_1__27 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization/ReadVariableOp__12 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization/ReadVariableOp_1__13 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp__14 [10/04/2021-21:26:20] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp_1__15 [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__992 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__990 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__992 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__990 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__992:0 for ONNX tensor: QuantLinearNode__992:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__992 [QuantizeLinear] outputs: [QuantLinearNode__992:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__980 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__978 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__980 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__978 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: 
QuantLinearNode__980:0 for ONNX tensor: QuantLinearNode__980:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__980 [QuantizeLinear] outputs: [QuantLinearNode__980:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__972 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__970 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__972 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__970 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__972:0 for ONNX tensor: QuantLinearNode__972:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__972 [QuantizeLinear] outputs: [QuantLinearNode__972:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__960 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__958 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__960 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__958 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__960:0 for ONNX tensor: QuantLinearNode__960:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__960 [QuantizeLinear] outputs: [QuantLinearNode__960:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__952 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__950 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__952 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__950 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__952:0 for ONNX tensor: QuantLinearNode__952:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__952 [QuantizeLinear] outputs: [QuantLinearNode__952:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__940 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__938 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 
[10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__940 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__938 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__940:0 for ONNX tensor: QuantLinearNode__940:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__940 [QuantizeLinear] outputs: [QuantLinearNode__940:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__932 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__930 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__932 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__930 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__932:0 for ONNX tensor: QuantLinearNode__932:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__932 [QuantizeLinear] outputs: [QuantLinearNode__932:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__920 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__918 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__920 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__918 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__920:0 for ONNX tensor: QuantLinearNode__920:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__920 [QuantizeLinear] outputs: [QuantLinearNode__920:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__912 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__910 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__912 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__910 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__912:0 for ONNX tensor: QuantLinearNode__912:0 
[10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__912 [QuantizeLinear] outputs: [QuantLinearNode__912:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__900 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__898 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__900 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__898 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__900:0 for ONNX tensor: QuantLinearNode__900:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__900 [QuantizeLinear] outputs: [QuantLinearNode__900:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__892 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__890 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__892 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__890 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__892:0 for ONNX tensor: QuantLinearNode__892:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__892 [QuantizeLinear] outputs: [QuantLinearNode__892:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__880 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__878 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__880 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__878 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__880:0 for ONNX tensor: QuantLinearNode__880:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__880 [QuantizeLinear] outputs: [QuantLinearNode__880:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__872 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__870 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__872 [QuantizeLinear] 
inputs: [StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__870 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__872:0 for ONNX tensor: QuantLinearNode__872:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__872 [QuantizeLinear] outputs: [QuantLinearNode__872:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__860 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__858 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__860 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__858 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__860:0 for ONNX tensor: QuantLinearNode__860:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__860 [QuantizeLinear] outputs: [QuantLinearNode__860:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__852 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__850 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__852 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__850 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__852:0 for ONNX tensor: QuantLinearNode__852:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__852 [QuantizeLinear] outputs: [QuantLinearNode__852:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__840 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__838 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__840 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__838 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__840:0 for ONNX tensor: QuantLinearNode__840:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__840 [QuantizeLinear] 
outputs: [QuantLinearNode__840:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__832 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__830 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__832 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 -> (32, 16, 3, 3)[FLOAT]], [quant_scale__830 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__832:0 for ONNX tensor: QuantLinearNode__832:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__832 [QuantizeLinear] outputs: [QuantLinearNode__832:0 -> (32, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__820 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__818 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__820 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 -> (32, 16, 1, 1)[FLOAT]], [quant_scale__818 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__820:0 for ONNX tensor: QuantLinearNode__820:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__820 [QuantizeLinear] outputs: [QuantLinearNode__820:0 -> (32, 16, 1, 1)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__812 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__810 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__812 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__810 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__812:0 for ONNX tensor: QuantLinearNode__812:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__812 [QuantizeLinear] outputs: [QuantLinearNode__812:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__804 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__802 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__804 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 -> 
(16, 16, 3, 3)[FLOAT]], [quant_scale__802 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__804:0 for ONNX tensor: QuantLinearNode__804:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__804 [QuantizeLinear] outputs: [QuantLinearNode__804:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__792 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__790 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__792 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__790 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__792:0 for ONNX tensor: QuantLinearNode__792:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__792 [QuantizeLinear] outputs: [QuantLinearNode__792:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__784 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__782 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__784 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__782 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__784:0 for ONNX tensor: QuantLinearNode__784:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__784 [QuantizeLinear] outputs: [QuantLinearNode__784:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__772 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__770 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__772 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__770 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__772:0 for ONNX tensor: QuantLinearNode__772:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__772 [QuantizeLinear] outputs: [QuantLinearNode__772:0 -> (16, 16, 3, 3)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__764 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__762 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__764 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__762 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__764:0 for ONNX tensor: QuantLinearNode__764:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__764 [QuantizeLinear] outputs: [QuantLinearNode__764:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__752 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__750 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__752 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__750 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__752:0 for ONNX tensor: QuantLinearNode__752:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__752 [QuantizeLinear] outputs: [QuantLinearNode__752:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__744 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__742 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__744 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__742 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__744:0 for ONNX tensor: QuantLinearNode__744:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__744 [QuantizeLinear] outputs: [QuantLinearNode__744:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__732 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__730 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__732 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__730 -> (16)[FLOAT]], 
[zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__732:0 for ONNX tensor: QuantLinearNode__732:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__732 [QuantizeLinear] outputs: [QuantLinearNode__732:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__724 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__722 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__724 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__722 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__724:0 for ONNX tensor: QuantLinearNode__724:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__724 [QuantizeLinear] outputs: [QuantLinearNode__724:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__712 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__710 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__712 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__710 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__712:0 for ONNX tensor: QuantLinearNode__712:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__712 [QuantizeLinear] outputs: [QuantLinearNode__712:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__704 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__702 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__704 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__702 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__704:0 for ONNX tensor: QuantLinearNode__704:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__704 [QuantizeLinear] outputs: [QuantLinearNode__704:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__692 [QuantizeLinear] 
[10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__690 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__692 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__690 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__692:0 for ONNX tensor: QuantLinearNode__692:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__692 [QuantizeLinear] outputs: [QuantLinearNode__692:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__684 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__682 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__684 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__682 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__684:0 for ONNX tensor: QuantLinearNode__684:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__684 [QuantizeLinear] outputs: [QuantLinearNode__684:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__672 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__670 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__672 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__670 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__672:0 for ONNX tensor: QuantLinearNode__672:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__672 [QuantizeLinear] outputs: [QuantLinearNode__672:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__664 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__662 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__664 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__662 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__664:0 for ONNX tensor: QuantLinearNode__664:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__664 [QuantizeLinear] outputs: [QuantLinearNode__664:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__652 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__650 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__652 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__650 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__652:0 for ONNX tensor: QuantLinearNode__652:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__652 [QuantizeLinear] outputs: [QuantLinearNode__652:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__644 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__642 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__644 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__642 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__644:0 for ONNX tensor: QuantLinearNode__644:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__644 [QuantizeLinear] outputs: [QuantLinearNode__644:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__632 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d/transpose__8 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__630 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__632 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d/transpose__8 -> (16, 3, 3, 3)[FLOAT]], [quant_scale__630 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d/transpose__8 for ONNX node: StatefulPartitionedCall/model/quant_conv2d/transpose__8 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__632:0 for ONNX tensor: QuantLinearNode__632:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__632 [QuantizeLinear] outputs: [QuantLinearNode__632:0 -> (16, 3, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__628 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: input_1 [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
quant_scale__626 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__628 [QuantizeLinear] inputs: [input_1 -> (-1, 3, 32, 32)[FLOAT]], [quant_scale__626 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__628:0 for ONNX tensor: QuantLinearNode__628:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__628 [QuantizeLinear] outputs: [QuantLinearNode__628:0 -> (-1, 3, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1188 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1186 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1188 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1186 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1188:0 for ONNX tensor: QuantLinearNode__1188:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1188 [QuantizeLinear] outputs: [QuantLinearNode__1188:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1180 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1178 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1180 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1178 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1180:0 for ONNX tensor: QuantLinearNode__1180:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1180 [QuantizeLinear] outputs: [QuantLinearNode__1180:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1168 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1166 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1168 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1166 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1168:0 for ONNX tensor: QuantLinearNode__1168:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1168 [QuantizeLinear] outputs: [QuantLinearNode__1168:0 -> (64, 64, 3, 
3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1160 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1158 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1160 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1158 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1160:0 for ONNX tensor: QuantLinearNode__1160:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1160 [QuantizeLinear] outputs: [QuantLinearNode__1160:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1148 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1146 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1148 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1146 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1148:0 for ONNX tensor: QuantLinearNode__1148:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1148 [QuantizeLinear] outputs: [QuantLinearNode__1148:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1140 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1138 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1140 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1138 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1140:0 for ONNX tensor: QuantLinearNode__1140:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1140 [QuantizeLinear] outputs: [QuantLinearNode__1140:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1128 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1126 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1128 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 -> (64, 64, 3, 
3)[FLOAT]], [quant_scale__1126 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1128:0 for ONNX tensor: QuantLinearNode__1128:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1128 [QuantizeLinear] outputs: [QuantLinearNode__1128:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1120 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1118 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1120 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1118 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1120:0 for ONNX tensor: QuantLinearNode__1120:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1120 [QuantizeLinear] outputs: [QuantLinearNode__1120:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1108 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1106 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1108 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1106 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1108:0 for ONNX tensor: QuantLinearNode__1108:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1108 [QuantizeLinear] outputs: [QuantLinearNode__1108:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1100 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1098 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1100 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1098 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1100:0 for ONNX tensor: QuantLinearNode__1100:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1100 [QuantizeLinear] outputs: [QuantLinearNode__1100:0 -> (64, 64, 3, 
3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1088 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1086 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1088 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1086 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1088:0 for ONNX tensor: QuantLinearNode__1088:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1088 [QuantizeLinear] outputs: [QuantLinearNode__1088:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1080 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1078 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1080 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1078 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1080:0 for ONNX tensor: QuantLinearNode__1080:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1080 [QuantizeLinear] outputs: [QuantLinearNode__1080:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1068 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1066 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1068 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1066 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1068:0 for ONNX tensor: QuantLinearNode__1068:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1068 [QuantizeLinear] outputs: [QuantLinearNode__1068:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1060 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1058 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1060 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 -> (64, 64, 3, 
3)[FLOAT]], [quant_scale__1058 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1060:0 for ONNX tensor: QuantLinearNode__1060:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1060 [QuantizeLinear] outputs: [QuantLinearNode__1060:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1048 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1046 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1048 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1046 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1048:0 for ONNX tensor: QuantLinearNode__1048:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1048 [QuantizeLinear] outputs: [QuantLinearNode__1048:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1040 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1038 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1040 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1038 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1040:0 for ONNX tensor: QuantLinearNode__1040:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1040 [QuantizeLinear] outputs: [QuantLinearNode__1040:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1028 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1026 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1028 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1026 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1028:0 for ONNX tensor: QuantLinearNode__1028:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1028 [QuantizeLinear] outputs: [QuantLinearNode__1028:0 -> (64, 64, 3, 
3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1020 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1018 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1020 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 -> (64, 32, 3, 3)[FLOAT]], [quant_scale__1018 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1020:0 for ONNX tensor: QuantLinearNode__1020:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1020 [QuantizeLinear] outputs: [QuantLinearNode__1020:0 -> (64, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1008 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1006 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1008 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 -> (64, 32, 1, 1)[FLOAT]], [quant_scale__1006 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1008:0 for ONNX tensor: QuantLinearNode__1008:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1008 [QuantizeLinear] outputs: [QuantLinearNode__1008:0 -> (64, 32, 1, 1)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1000 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__998 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1000 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__998 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1000:0 for ONNX tensor: QuantLinearNode__1000:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1000 [QuantizeLinear] outputs: [QuantLinearNode__1000:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__993 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__992:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__990 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__993 [DequantizeLinear] inputs: [QuantLinearNode__992:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__990 -> (32)[FLOAT]], [zero_point__919 -> 
(32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__993 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__981 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__980:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__978 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__981 [DequantizeLinear] inputs: [QuantLinearNode__980:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__978 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__981 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__973 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__972:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__970 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__973 [DequantizeLinear] inputs: [QuantLinearNode__972:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__970 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__973 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__961 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__960:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__958 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__961 [DequantizeLinear] inputs: [QuantLinearNode__960:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__958 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__961 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__953 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__952:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__950 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__953 [DequantizeLinear] inputs: [QuantLinearNode__952:0 -> (32, 32, 
3, 3)[FLOAT]], [quant_scale__950 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__953 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__941 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__940:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__938 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__941 [DequantizeLinear] inputs: [QuantLinearNode__940:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__938 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__941 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__933 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__932:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__930 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__933 [DequantizeLinear] inputs: [QuantLinearNode__932:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__930 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__933 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__921 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__920:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__918 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__921 [DequantizeLinear] inputs: [QuantLinearNode__920:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__918 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__921 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__913 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__912:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__910 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] 
DequantLinearNode__913 [DequantizeLinear] inputs: [QuantLinearNode__912:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__910 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__913 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__901 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__900:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__898 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__901 [DequantizeLinear] inputs: [QuantLinearNode__900:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__898 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__901 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__893 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__892:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__890 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__893 [DequantizeLinear] inputs: [QuantLinearNode__892:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__890 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__893 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__881 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__880:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__878 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__881 [DequantizeLinear] inputs: [QuantLinearNode__880:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__878 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__881 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__873 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__872:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__870 [10/04/2021-21:26:20] [V] [TRT] 
Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__873 [DequantizeLinear] inputs: [QuantLinearNode__872:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__870 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__873 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__861 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__860:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__858 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__861 [DequantizeLinear] inputs: [QuantLinearNode__860:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__858 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__861 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__853 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__852:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__850 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__853 [DequantizeLinear] inputs: [QuantLinearNode__852:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__850 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__853 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__841 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__840:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__838 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__841 [DequantizeLinear] inputs: [QuantLinearNode__840:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__838 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__841 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__833 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__832:0 [10/04/2021-21:26:20] [V] [TRT] 
Searching for input: quant_scale__830 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__833 [DequantizeLinear] inputs: [QuantLinearNode__832:0 -> (32, 16, 3, 3)[FLOAT]], [quant_scale__830 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__833 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize_1:0 -> (32, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__821 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__820:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__818 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__821 [DequantizeLinear] inputs: [QuantLinearNode__820:0 -> (32, 16, 1, 1)[FLOAT]], [quant_scale__818 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__821 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize_1:0 -> (32, 16, 1, 1)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__813 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__812:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__810 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__813 [DequantizeLinear] inputs: [QuantLinearNode__812:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__810 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__813 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__805 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__804:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__802 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__805 [DequantizeLinear] inputs: [QuantLinearNode__804:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__802 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__805 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__793 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] 
Searching for input: QuantLinearNode__792:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__790 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__793 [DequantizeLinear] inputs: [QuantLinearNode__792:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__790 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__793 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__785 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__784:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__782 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__785 [DequantizeLinear] inputs: [QuantLinearNode__784:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__782 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__785 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__773 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__772:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__770 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__773 [DequantizeLinear] inputs: [QuantLinearNode__772:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__770 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__773 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__765 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__764:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__762 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__765 [DequantizeLinear] inputs: [QuantLinearNode__764:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__762 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__765 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: 
DequantLinearNode__753 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__752:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__750 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__753 [DequantizeLinear] inputs: [QuantLinearNode__752:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__750 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__753 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__745 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__744:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__742 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__745 [DequantizeLinear] inputs: [QuantLinearNode__744:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__742 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__745 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__733 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__732:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__730 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__733 [DequantizeLinear] inputs: [QuantLinearNode__732:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__730 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__733 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__725 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__724:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__722 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__725 [DequantizeLinear] inputs: [QuantLinearNode__724:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__722 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__725 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize_1:0 -> 
(16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__713 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__712:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__710 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__713 [DequantizeLinear] inputs: [QuantLinearNode__712:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__710 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__713 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__705 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__704:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__702 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__705 [DequantizeLinear] inputs: [QuantLinearNode__704:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__702 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__705 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__693 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__692:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__690 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__693 [DequantizeLinear] inputs: [QuantLinearNode__692:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__690 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__693 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__685 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__684:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__682 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__685 [DequantizeLinear] inputs: [QuantLinearNode__684:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__682 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__685 [DequantizeLinear] outputs: 
[StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__673 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__672:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__670 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__673 [DequantizeLinear] inputs: [QuantLinearNode__672:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__670 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__673 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__665 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__664:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__662 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__665 [DequantizeLinear] inputs: [QuantLinearNode__664:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__662 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__665 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__653 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__652:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__650 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__653 [DequantizeLinear] inputs: [QuantLinearNode__652:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__650 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__653 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__645 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__644:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__642 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__645 [DequantizeLinear] inputs: [QuantLinearNode__644:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__642 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] 
[TRT] DequantLinearNode__645 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__633 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__632:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__630 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__633 [DequantizeLinear] inputs: [QuantLinearNode__632:0 -> (16, 3, 3, 3)[FLOAT]], [quant_scale__630 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__633 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize_1:0 -> (16, 3, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__629 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__628:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__626 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__629 [DequantizeLinear] inputs: [QuantLinearNode__628:0 -> (-1, 3, 32, 32)[FLOAT]], [quant_scale__626 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__629 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize:0 -> (-1, 3, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d/BiasAdd/ReadVariableOp__11 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize:0 -> (-1, 3, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize_1:0 -> (16, 3, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d/BiasAdd/ReadVariableOp__11 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 3, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
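The weight tensors in the entries above each pass through a QuantizeLinear/DequantizeLinear pair whose scale and zero_point carry one value per output channel (for example a (16)-element scale for the (16, 3, 3, 3) quant_conv2d kernel), while the network input uses a single scalar scale. For reference, the ONNX semantics the parser is registering here amount to per-channel fake quantization; the short numpy sketch below reproduces that arithmetic with made-up values, since the real scales and zero-points are initializers inside resnet.onnx.

# Sketch of ONNX QuantizeLinear/DequantizeLinear semantics for a per-channel
# weight tensor, as registered above. Values are illustrative only; the real
# scales and zero-points live in resnet.onnx.
import numpy as np

weights = np.random.randn(16, 3, 3, 3).astype(np.float32)      # (out_ch, in_ch, kH, kW)
scale = np.abs(weights).max(axis=(1, 2, 3)) / 127.0             # one scale per output channel
zero_point = np.zeros(16, dtype=np.int8)                        # symmetric quantization

s = scale.reshape(16, 1, 1, 1)
zp = zero_point.reshape(16, 1, 1, 1).astype(np.float32)

# QuantizeLinear (axis 0): q = clamp(round(w / scale) + zero_point, -128, 127)
q = np.clip(np.rint(weights / s) + zp, -128, 127).astype(np.int8)
# DequantizeLinear: w_hat = (q - zero_point) * scale
w_hat = (q.astype(np.float32) - zp) * s

print("max quantization error:", np.abs(weights - w_hat).max())  # bounded by roughly scale / 2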
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization/ReadVariableOp__12 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization/ReadVariableOp_1__13 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp__14 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp_1__15 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization/ReadVariableOp__12 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization/ReadVariableOp_1__13 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp__14 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp_1__15 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation/Relu for ONNX node: StatefulPartitionedCall/model/activation/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__640 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] 
Searching for input: StatefulPartitionedCall/model/activation/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__634 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__640 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__634 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__640:0 for ONNX tensor: QuantLinearNode__640:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__640 [QuantizeLinear] outputs: [QuantLinearNode__640:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__641 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__640:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__634 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__641 [DequantizeLinear] inputs: [QuantLinearNode__640:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__634 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__641 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd/ReadVariableOp__23 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd/ReadVariableOp__23 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
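The parser's remark that the kernel weights are not set yet and must be attached with setInput(1, kernel_tensor) reflects how explicitly quantized convolutions are assembled: the kernel arrives as a tensor (the output of the weight DequantizeLinear layer) rather than as baked-in constant weights. The sketch below shows that wiring with the TensorRT Python network-definition API; it assumes TensorRT 8.x bindings and uses placeholder weights and scales, so treat it as an illustration of the pattern, not a reproduction of what trtexec builds internally.

# Hedged sketch (assumes TensorRT 8.x Python bindings): one explicitly quantized
# convolution whose kernel comes from a DequantizeLinear output.
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

x = network.add_input("input", trt.float32, (-1, 3, 32, 32))

# Constant FP32 weights plus a per-channel scale (illustrative values).
w = network.add_constant((16, 3, 3, 3), trt.Weights(np.ones((16, 3, 3, 3), dtype=np.float32)))
w_scale = network.add_constant((16,), trt.Weights(np.full((16,), 0.01, dtype=np.float32)))

wq = network.add_quantize(w.get_output(0), w_scale.get_output(0))       # weight QuantizeLinear
wq.axis = 0                                                             # per output channel
wdq = network.add_dequantize(wq.get_output(0), w_scale.get_output(0))   # weight DequantizeLinear
wdq.axis = 0

# Convolution created with empty kernel weights; the dequantized weight tensor
# is wired in afterwards via set_input(1, ...), which is what the log refers to.
conv = network.add_convolution_nd(x, 16, (3, 3), trt.Weights(), trt.Weights())
conv.set_input(1, wdq.get_output(0))
conv.padding_nd = (1, 1)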
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp__24 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp_1__25 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp__26 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp_1__27 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp__24 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp_1__25 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp__26 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp_1__27 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_1/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_1/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_1/Relu for ONNX node: StatefulPartitionedCall/model/activation_1/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_1/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_1/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_1/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_1/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: 
QuantLinearNode__648 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_1/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__646 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__648 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_1/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__646 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__648:0 for ONNX tensor: QuantLinearNode__648:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__648 [QuantizeLinear] outputs: [QuantLinearNode__648:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__649 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__648:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__646 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__649 [DequantizeLinear] inputs: [QuantLinearNode__648:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__646 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__649 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd/ReadVariableOp__33 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd/ReadVariableOp__33 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
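Each activation follows the same Relu -> QuantizeLinear -> DequantizeLinear chain with a scalar (shape ()) scale, i.e. per-tensor fake quantization, and the dequantized tensor is what feeds the next convolution. In the ONNX file being parsed this is just three ordinary nodes; the fragment below sketches how such a chain is emitted with onnx.helper, using illustrative tensor names rather than the ones from resnet.onnx.

# Illustrative ONNX fragment: per-tensor Q/DQ on a Relu output feeding a Conv.
# Tensor names are made up; the real graph uses the names shown in the log.
from onnx import helper, TensorProto

act_scale = helper.make_tensor("act_scale", TensorProto.FLOAT, [], [0.05])   # scalar scale
act_zp = helper.make_tensor("act_zp", TensorProto.INT8, [], [0])             # scalar zero point

q = helper.make_node("QuantizeLinear", ["relu_out", "act_scale", "act_zp"], ["relu_q"])
dq = helper.make_node("DequantizeLinear", ["relu_q", "act_scale", "act_zp"], ["relu_dq"])
conv = helper.make_node("Conv", ["relu_dq", "w_dq", "bias"], ["conv_out"],
                        kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1])

print(q)
print(dq)
print(conv)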
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp__34 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp_1__35 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp__36 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp_1__37 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp__34 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp_1__35 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp__36 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp_1__37 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__636 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__634 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__636 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__634 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__636:0 for ONNX tensor: QuantLinearNode__636:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__636 [QuantizeLinear] outputs: [QuantLinearNode__636:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__637 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__636:0 [10/04/2021-21:26:20] [V] 
[TRT] Searching for input: quant_scale__634 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__637 [DequantizeLinear] inputs: [QuantLinearNode__636:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__634 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__637 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add/add for ONNX node: StatefulPartitionedCall/model/add/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add/add:0 for ONNX tensor: StatefulPartitionedCall/model/add/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add/add [Add] outputs: [StatefulPartitionedCall/model/add/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_2/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_2/Relu [Relu] inputs: [StatefulPartitionedCall/model/add/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_2/Relu for ONNX node: StatefulPartitionedCall/model/activation_2/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_2/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_2/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_2/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_2/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__660 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_2/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__654 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__660 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_2/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__654 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__660:0 for ONNX tensor: QuantLinearNode__660:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__660 [QuantizeLinear] outputs: [QuantLinearNode__660:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__661 [DequantizeLinear] [10/04/2021-21:26:20] 
[V] [TRT] Searching for input: QuantLinearNode__660:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__654 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__661 [DequantizeLinear] inputs: [QuantLinearNode__660:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__654 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__661 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd/ReadVariableOp__45 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd/ReadVariableOp__45 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
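The entries above close the first residual block: the output of activation/Relu is quantized twice with the same scale (quant_scale__634), once on the path into quant_conv2d_1 and once on the quant_identity branch, and add/add then sums the identity Q/DQ output with batch_normalization_2's output before activation_2/Relu. Sharing one scale keeps the two inputs of the Add numerically consistent. The toy sketch below only documents that wiring; fake_quant and conv_stub are stand-ins, not the model's layers.

# Dataflow of the residual block traced above, with placeholder layers.
import numpy as np

def fake_quant(x, scale):                        # per-tensor QuantizeLinear + DequantizeLinear
    return np.clip(np.rint(x / scale), -128, 127) * scale

def conv_stub(x):                                # stand-in for conv + batch norm
    return x

def residual_block(x, shared_scale, act_scale):
    xq = fake_quant(x, shared_scale)             # QuantLinearNode__640 (conv path)
    y = np.maximum(conv_stub(xq), 0.0)           # quant_conv2d_1 -> batch_normalization_1 -> activation_1
    yq = fake_quant(y, act_scale)                # QuantLinearNode__648 / DequantLinearNode__649
    y = conv_stub(yq)                            # quant_conv2d_2 -> batch_normalization_2
    identity = fake_quant(x, shared_scale)       # QuantLinearNode__636 -> quant_identity branch
    return np.maximum(y + identity, 0.0)         # add/add -> activation_2/Relu

out = residual_block(np.random.rand(1, 16, 32, 32).astype(np.float32), 0.05, 0.04)
print(out.shape)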
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp__46 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp_1__47 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp__48 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp_1__49 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp__46 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp_1__47 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp__48 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp_1__49 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_3/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_3/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_3/Relu for ONNX node: StatefulPartitionedCall/model/activation_3/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_3/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_3/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_3/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_3/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: 
QuantLinearNode__668 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_3/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__666 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__668 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_3/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__666 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__668:0 for ONNX tensor: QuantLinearNode__668:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__668 [QuantizeLinear] outputs: [QuantLinearNode__668:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__669 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__668:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__666 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__669 [DequantizeLinear] inputs: [QuantLinearNode__668:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__666 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__669 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd/ReadVariableOp__55 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd/ReadVariableOp__55 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
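Every activation in this stretch is wrapped in the same QuantizeLinear/DequantizeLinear pattern: each pair reads the tensor, a scalar FLOAT scale initializer (quant_scale__654, quant_scale__666, ...) and the single scalar INT8 zero point zero_point__1163 that all of the pairs share. If you want to see those pairs without scrolling through the verbose parser echo, a short script with the onnx Python package can list them; the script below is only an illustration and assumes the scales and zero points are stored as graph initializers, which is how they appear here.

import onnx
from onnx import numpy_helper

model = onnx.load("resnet.onnx")
inits = {t.name: numpy_helper.to_array(t) for t in model.graph.initializer}

for node in model.graph.node:
    if node.op_type in ("QuantizeLinear", "DequantizeLinear"):
        # ONNX Q/DQ inputs: data, scale, zero_point (zero_point is optional in
        # general, but every pair in this log carries all three).
        data, scale, zero_point = node.input
        print(f"{node.name:28s} {node.op_type:16s} "
              f"scale={inits.get(scale)} zero_point={inits.get(zero_point)}")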
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp__56 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp_1__57 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp__58 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp_1__59 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp__56 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp_1__57 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp__58 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp_1__59 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__656 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_2/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__654 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__656 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_2/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__654 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__656:0 for ONNX tensor: QuantLinearNode__656:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__656 [QuantizeLinear] outputs: [QuantLinearNode__656:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__657 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__656:0 [10/04/2021-21:26:20] 
[V] [TRT] Searching for input: quant_scale__654 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__657 [DequantizeLinear] inputs: [QuantLinearNode__656:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__654 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_1/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_1/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__657 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_1/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_1/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_1/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_1/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_1/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_1/add for ONNX node: StatefulPartitionedCall/model/add_1/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_1/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_1/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_1/add [Add] outputs: [StatefulPartitionedCall/model/add_1/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_4/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_1/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_4/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_1/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_4/Relu for ONNX node: StatefulPartitionedCall/model/activation_4/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_4/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_4/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_4/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__680 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_4/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__674 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__680 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_4/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__674 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__680:0 for ONNX tensor: QuantLinearNode__680:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__680 [QuantizeLinear] outputs: [QuantLinearNode__680:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__681 
[DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__680:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__674 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__681 [DequantizeLinear] inputs: [QuantLinearNode__680:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__674 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__681 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd/ReadVariableOp__67 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd/ReadVariableOp__67 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp__68 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp_1__69 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp__70 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp_1__71 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp__68 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp_1__69 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp__70 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp_1__71 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_5/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_5/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_5/Relu for ONNX node: StatefulPartitionedCall/model/activation_5/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_5/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_5/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_5/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_5/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: 
QuantLinearNode__688 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_5/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__686 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__688 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_5/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__686 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__688:0 for ONNX tensor: QuantLinearNode__688:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__688 [QuantizeLinear] outputs: [QuantLinearNode__688:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__689 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__688:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__686 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__689 [DequantizeLinear] inputs: [QuantLinearNode__688:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__686 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__689 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd/ReadVariableOp__77 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd/ReadVariableOp__77 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
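Note that every quantize_and_dequantize output in the log is still reported as (-1, 16, 32, 32)[FLOAT]: at this stage the Q/DQ pairs are fake quantization carried out in FP32. Numerically, a QuantizeLinear followed by a DequantizeLinear with the same scale and zero point performs the round trip sketched below; the scale and zero-point values are invented for illustration, since the log only shows the initializer names (quant_scale__686, zero_point__1163), not their contents.

import numpy as np

def quantize_linear(x, scale, zero_point):
    # ONNX QuantizeLinear: round to nearest (ties to even), add zero point, saturate to int8
    return np.clip(np.rint(x / scale) + zero_point, -128, 127).astype(np.int8)

def dequantize_linear(q, scale, zero_point):
    # ONNX DequantizeLinear: map the int8 code back onto the float grid
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(1, 16, 32, 32).astype(np.float32)  # stand-in activation tensor
scale, zero_point = np.float32(0.05), np.int8(0)        # hypothetical values
x_hat = dequantize_linear(quantize_linear(x, scale, zero_point), scale, zero_point)
print(np.abs(x - x_hat).max())  # <= scale / 2 wherever x stayed inside the int8 range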
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp__78 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp_1__79 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp__80 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp_1__81 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp__78 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp_1__79 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp__80 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp_1__81 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__676 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_4/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__674 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__676 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_4/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__674 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__676:0 for ONNX tensor: QuantLinearNode__676:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__676 [QuantizeLinear] outputs: [QuantLinearNode__676:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__677 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__676:0 [10/04/2021-21:26:20] 
[V] [TRT] Searching for input: quant_scale__674 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__677 [DequantizeLinear] inputs: [QuantLinearNode__676:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__674 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_2/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_2/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__677 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_2/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_2/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_2/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_2/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_2/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_2/add for ONNX node: StatefulPartitionedCall/model/add_2/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_2/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_2/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_2/add [Add] outputs: [StatefulPartitionedCall/model/add_2/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_6/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_2/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_6/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_2/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_6/Relu for ONNX node: StatefulPartitionedCall/model/activation_6/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_6/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_6/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_6/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_6/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__700 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_6/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__694 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__700 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_6/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__694 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__700:0 for ONNX tensor: QuantLinearNode__700:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__700 [QuantizeLinear] outputs: [QuantLinearNode__700:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__701 
[DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__700:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__694 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__701 [DequantizeLinear] inputs: [QuantLinearNode__700:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__694 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__701 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd/ReadVariableOp__89 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd/ReadVariableOp__89 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
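Each quantized convolution is followed by a FusedBatchNormV3 node whose four extra (16)-element inputs are, per the ONNX BatchNormalization definition, the per-channel scale, bias, running mean and running variance. At inference time that is just the per-channel affine transform sketched below; the epsilon is an attribute on each node, so the value used here is only a placeholder.

import numpy as np

def batch_norm(x, gamma, beta, mean, var, eps=1e-3):
    # ONNX BatchNormalization (inference mode): per-channel affine over axis 1
    shape = (1, -1, 1, 1)
    return (gamma.reshape(shape) * (x - mean.reshape(shape))
            / np.sqrt(var.reshape(shape) + eps) + beta.reshape(shape))

x = np.random.randn(2, 16, 32, 32).astype(np.float32)       # conv output stand-in
gamma, beta = np.ones(16, np.float32), np.zeros(16, np.float32)
mean, var = np.zeros(16, np.float32), np.ones(16, np.float32)
print(batch_norm(x, gamma, beta, mean, var).shape)           # (2, 16, 32, 32)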
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp__90 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp_1__91 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp__92 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp_1__93 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp__90 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp_1__91 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp__92 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp_1__93 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_7/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_7/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_7/Relu for ONNX node: StatefulPartitionedCall/model/activation_7/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_7/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_7/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_7/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_7/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: 
QuantLinearNode__708 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_7/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__706 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__708 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_7/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__706 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__708:0 for ONNX tensor: QuantLinearNode__708:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__708 [QuantizeLinear] outputs: [QuantLinearNode__708:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__709 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__708:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__706 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__709 [DequantizeLinear] inputs: [QuantLinearNode__708:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__706 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__709 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd/ReadVariableOp__99 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd/ReadVariableOp__99 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp__100 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp_1__101 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp__102 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp_1__103 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp__100 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp_1__101 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp__102 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp_1__103 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__696 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_6/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__694 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__696 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_6/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__694 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__696:0 for ONNX tensor: QuantLinearNode__696:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__696 [QuantizeLinear] outputs: [QuantLinearNode__696:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__697 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__696:0 
[10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__694 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__697 [DequantizeLinear] inputs: [QuantLinearNode__696:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__694 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_3/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_3/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__697 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_3/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_3/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_3/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_3/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_3/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_3/add for ONNX node: StatefulPartitionedCall/model/add_3/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_3/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_3/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_3/add [Add] outputs: [StatefulPartitionedCall/model/add_3/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_8/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_3/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_8/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_3/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_8/Relu for ONNX node: StatefulPartitionedCall/model/activation_8/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_8/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_8/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_8/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_8/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__720 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_8/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__714 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__720 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_8/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__714 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__720:0 for ONNX tensor: QuantLinearNode__720:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__720 [QuantizeLinear] outputs: [QuantLinearNode__720:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: 
DequantLinearNode__721 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__720:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__714 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__721 [DequantizeLinear] inputs: [QuantLinearNode__720:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__714 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__721 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd/ReadVariableOp__111 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd/ReadVariableOp__111 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
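All of this verbose output is the ONNX parser echoing each node as it registers the corresponding TensorRT layer and tensor. The same parse can also be driven programmatically; the sketch below assumes the TensorRT 8.x Python bindings and enables the INT8 builder flag so that the explicit Q/DQ pairs in the graph are honored when the engine is built. The batch range in the optimization profile is chosen arbitrarily for the sketch.

import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)   # VERBOSE reproduces this kind of per-node output
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("resnet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit(1)

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)     # needed for the Q/DQ pairs to run in int8

# The leading batch dimension is dynamic (-1), so the build needs a profile.
inp = network.get_input(0)
chw = tuple(inp.shape)[1:]                # keep the static non-batch dims
profile = builder.create_optimization_profile()
profile.set_shape(inp.name, (1, *chw), (8, *chw), (32, *chw))
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)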
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp__112 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp_1__113 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp__114 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp_1__115 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp__112 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp_1__113 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp__114 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp_1__115 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_9/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_9/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_9/Relu for ONNX node: StatefulPartitionedCall/model/activation_9/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_9/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_9/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_9/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_9/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: 
QuantLinearNode__728 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_9/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__726 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__728 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_9/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__726 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__728:0 for ONNX tensor: QuantLinearNode__728:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__728 [QuantizeLinear] outputs: [QuantLinearNode__728:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__729 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__728:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__726 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__729 [DequantizeLinear] inputs: [QuantLinearNode__728:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__726 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__729 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd/ReadVariableOp__121 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd/ReadVariableOp__121 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp__122 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp_1__123 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp__124 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp_1__125 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp__122 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp_1__123 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp__124 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp_1__125 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__716 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_8/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__714 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__716 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_8/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__714 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__716:0 for ONNX tensor: QuantLinearNode__716:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__716 [QuantizeLinear] outputs: [QuantLinearNode__716:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__717 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__716:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__714 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__717 [DequantizeLinear] inputs: [QuantLinearNode__716:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__714 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_4/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_4/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__717 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_4/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_4/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_4/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_4/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_4/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_4/add for ONNX node: StatefulPartitionedCall/model/add_4/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_4/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_4/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_4/add [Add] outputs: [StatefulPartitionedCall/model/add_4/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_10/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_4/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_10/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_4/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_10/Relu for ONNX node: StatefulPartitionedCall/model/activation_10/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_10/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_10/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_10/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_10/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__740 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_10/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__738 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__740 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_10/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__738 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__740:0 for ONNX tensor: QuantLinearNode__740:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__740 [QuantizeLinear] outputs: [QuantLinearNode__740:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: DequantLinearNode__741 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__740:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__738 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__741 [DequantizeLinear] inputs: [QuantLinearNode__740:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__738 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__741 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd/ReadVariableOp__133 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd/ReadVariableOp__133 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp__134 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp_1__135 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp__136 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp_1__137 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp__134 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp_1__135 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp__136 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp_1__137 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_11/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_11/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_11/Relu for ONNX node: StatefulPartitionedCall/model/activation_11/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_11/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_11/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_11/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_11/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__748 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_11/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__746 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__748 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_11/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__746 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__748:0 for ONNX tensor: QuantLinearNode__748:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__748 [QuantizeLinear] outputs: [QuantLinearNode__748:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__749 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__748:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__746 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__749 [DequantizeLinear] inputs: [QuantLinearNode__748:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__746 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__749 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd/ReadVariableOp__143 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd/ReadVariableOp__143 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
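The recurring verbose message "Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call." is an informational consequence of how Q/DQ weights are represented in this model: each quant_conv2d_N kernel reaches its Conv node as the output of a weight-side DequantizeLinear (the quantize_and_dequantize_1:0 input of shape (16, 16, 3, 3)) rather than as a plain initializer, so the parser creates the convolution with the kernel wired in as a second layer input instead of static weights. A minimal sketch of what that pattern looks like on the ONNX side, built with the onnx helper API (all names, shapes and values here are illustrative, not taken from this network; bias omitted for brevity):

    import numpy as np
    import onnx
    from onnx import helper, numpy_helper, TensorProto

    # Hypothetical int8 weights plus per-tensor scale/zero-point initializers.
    w_q = numpy_helper.from_array(
        np.zeros((16, 16, 3, 3), dtype=np.int8), name="w_quantized")
    w_scale = numpy_helper.from_array(np.array(0.01, dtype=np.float32), name="w_scale")
    w_zp = numpy_helper.from_array(np.array(0, dtype=np.int8), name="w_zero_point")

    # The weights are dequantized first ...
    dq_w = helper.make_node(
        "DequantizeLinear",
        ["w_quantized", "w_scale", "w_zero_point"],
        ["w_dequantized"],
        name="DequantLinearNode_w",
    )

    # ... and the Conv consumes that DequantizeLinear output as its weight
    # input, which is what makes the parser defer the kernel to setInput(1, ...).
    conv = helper.make_node(
        "Conv",
        ["act_qdq", "w_dequantized"],
        ["conv_out"],
        name="quant_conv2d/BiasAdd",
        kernel_shape=[3, 3],
        pads=[1, 1, 1, 1],
    )

    graph = helper.make_graph(
        [dq_w, conv],
        "qdq_conv_sketch",
        inputs=[helper.make_tensor_value_info("act_qdq", TensorProto.FLOAT, ["N", 16, 32, 32])],
        outputs=[helper.make_tensor_value_info("conv_out", TensorProto.FLOAT, ["N", 16, 32, 32])],
        initializer=[w_q, w_scale, w_zp],
    )
    onnx.checker.check_model(helper.make_model(graph))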
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp__144 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp_1__145 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp__146 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp_1__147 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp__144 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp_1__145 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp__146 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp_1__147 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__736 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_10/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__738 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__736 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_10/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__738 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__736:0 for ONNX tensor: QuantLinearNode__736:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__736 [QuantizeLinear] outputs: [QuantLinearNode__736:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__737 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__736:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__738 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__737 [DequantizeLinear] inputs: [QuantLinearNode__736:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__738 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_5/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_5/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__737 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_5/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_5/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_5/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_5/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_5/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_5/add for ONNX node: StatefulPartitionedCall/model/add_5/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_5/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_5/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_5/add [Add] outputs: [StatefulPartitionedCall/model/add_5/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_12/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_5/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_12/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_5/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_12/Relu for ONNX node: StatefulPartitionedCall/model/activation_12/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_12/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_12/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_12/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_12/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__760 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_12/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__758 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__760 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_12/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__758 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__760:0 for ONNX tensor: QuantLinearNode__760:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__760 [QuantizeLinear] outputs: [QuantLinearNode__760:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: DequantLinearNode__761 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__760:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__758 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__761 [DequantizeLinear] inputs: [QuantLinearNode__760:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__758 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__761 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd/ReadVariableOp__155 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd/ReadVariableOp__155 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp__156 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp_1__157 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp__158 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp_1__159 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp__156 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp_1__157 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp__158 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp_1__159 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_13/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_13/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_13/Relu for ONNX node: StatefulPartitionedCall/model/activation_13/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_13/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_13/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_13/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_13/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], 
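Each quant_conv2d_N is followed by a FusedBatchNormV3 node that the parser imports as ONNX BatchNormalization, whose inputs after the activation are four per-channel (16)-element tensors: scale, bias, running mean and running variance, in that order. For reference, the inference-time arithmetic of that operator is the standard normalization below (a numpy sketch with invented values; the epsilon is an attribute of the ONNX node and is not printed in this log):

    import numpy as np

    n, c, h, w = 2, 16, 32, 32
    x = np.random.randn(n, c, h, w).astype(np.float32)

    gamma = np.ones(c, dtype=np.float32)   # scale
    beta = np.zeros(c, dtype=np.float32)   # bias
    mean = np.zeros(c, dtype=np.float32)   # running mean
    var = np.ones(c, dtype=np.float32)     # running variance
    eps = 1e-3                             # hypothetical epsilon attribute

    # Broadcast the per-channel parameters over N, H, W and normalize.
    shape = (1, c, 1, 1)
    inv_std = 1.0 / np.sqrt(var.reshape(shape) + eps)
    y = gamma.reshape(shape) * (x - mean.reshape(shape)) * inv_std + beta.reshape(shape)

    print(y.shape)  # (2, 16, 32, 32)

Since each of these nodes sits directly on a convolution output, TensorRT will typically fold the normalization into the preceding convolution while building the engine.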
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__768 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_13/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__766 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__768 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_13/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__766 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__768:0 for ONNX tensor: QuantLinearNode__768:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__768 [QuantizeLinear] outputs: [QuantLinearNode__768:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__769 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__768:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__766 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__769 [DequantizeLinear] inputs: [QuantLinearNode__768:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__766 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__769 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd/ReadVariableOp__165 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd/ReadVariableOp__165 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp__166 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp_1__167 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp__168 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp_1__169 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp__166 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp_1__167 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp__168 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp_1__169 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__756 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_12/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__758 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__756 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_12/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__758 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__756:0 for ONNX tensor: QuantLinearNode__756:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__756 [QuantizeLinear] outputs: [QuantLinearNode__756:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__757 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__756:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__758 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__757 [DequantizeLinear] inputs: [QuantLinearNode__756:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__758 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_6/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_6/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__757 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_6/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_6/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_6/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_6/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_6/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_6/add for ONNX node: StatefulPartitionedCall/model/add_6/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_6/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_6/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_6/add [Add] outputs: [StatefulPartitionedCall/model/add_6/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_14/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_6/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_14/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_6/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_14/Relu for ONNX node: StatefulPartitionedCall/model/activation_14/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_14/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_14/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_14/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_14/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__780 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_14/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__774 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__780 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_14/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__774 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__780:0 for ONNX tensor: QuantLinearNode__780:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__780 [QuantizeLinear] outputs: [QuantLinearNode__780:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: DequantLinearNode__781 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__780:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__774 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__781 [DequantizeLinear] inputs: [QuantLinearNode__780:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__774 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__781 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd/ReadVariableOp__177 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd/ReadVariableOp__177 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp__178 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp_1__179 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp__180 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp_1__181 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp__178 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp_1__179 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp__180 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp_1__181 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_15/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_15/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_15/Relu for ONNX node: StatefulPartitionedCall/model/activation_15/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_15/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_15/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_15/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_15/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], 
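One detail visible in the entries above is how the residual connections are quantized: the activation feeding a block is quantized twice with the same scale initializer, once for the convolution branch and once for the skip branch (for example quant_scale__758 is used both by QuantLinearNode__760 in front of quant_conv2d_13 and by QuantLinearNode__756, whose DequantizeLinear becomes quant_identity_6 and feeds add_6). Both operands of the element-wise Add therefore see identically fake-quantized values. A small numpy sketch of that shared-scale behaviour (all numbers invented for illustration):

    import numpy as np

    def fake_quant(x, scale, zp=0):
        # Per-tensor quantize-and-dequantize, as in the Q/DQ pairs above.
        q = np.clip(np.rint(x / scale) + zp, -128, 127)
        return (q - zp) * scale

    # One activation tensor feeding both branches of a residual block,
    # quantized twice with the same shared scale (the role played by
    # quant_scale__758 for QuantLinearNode__756 and QuantLinearNode__760).
    act = np.array([0.1, 0.52, 1.9], dtype=np.float32)
    shared_scale = np.float32(0.05)

    skip_branch = fake_quant(act, shared_scale)      # quant_identity_N path
    conv_branch_in = fake_quant(act, shared_scale)   # quant_conv2d_N path

    # Because the scale is shared, both branches carry identical values,
    # so the later Add combines consistently quantized operands.
    assert np.array_equal(skip_branch, conv_branch_in)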
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__788 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_15/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__786 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__788 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_15/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__786 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__788:0 for ONNX tensor: QuantLinearNode__788:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__788 [QuantizeLinear] outputs: [QuantLinearNode__788:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__789 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__788:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__786 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__789 [DequantizeLinear] inputs: [QuantLinearNode__788:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__786 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__789 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd/ReadVariableOp__187 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd/ReadVariableOp__187 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp__188 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp_1__189 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp__190 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp_1__191 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp__188 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp_1__189 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp__190 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp_1__191 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__776 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_14/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__774 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__776 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_14/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__774 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__776:0 for ONNX tensor: QuantLinearNode__776:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__776 [QuantizeLinear] outputs: [QuantLinearNode__776:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__777 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__776:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__774 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__777 [DequantizeLinear] inputs: [QuantLinearNode__776:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__774 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_7/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_7/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__777 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_7/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_7/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_7/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_7/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_7/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_7/add for ONNX node: StatefulPartitionedCall/model/add_7/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_7/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_7/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_7/add [Add] outputs: [StatefulPartitionedCall/model/add_7/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_16/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_7/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_16/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_7/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_16/Relu for ONNX node: StatefulPartitionedCall/model/activation_16/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_16/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_16/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_16/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_16/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__800 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_16/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__794 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__800 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_16/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__794 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__800:0 for ONNX tensor: QuantLinearNode__800:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__800 [QuantizeLinear] outputs: [QuantLinearNode__800:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: DequantLinearNode__801 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__800:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__794 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__801 [DequantizeLinear] inputs: [QuantLinearNode__800:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__794 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__801 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd/ReadVariableOp__199 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd/ReadVariableOp__199 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp__200 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp_1__201 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp__202 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp_1__203 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp__200 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp_1__201 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp__202 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp_1__203 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_17/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_17/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_17/Relu for ONNX node: StatefulPartitionedCall/model/activation_17/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_17/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_17/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_17/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_17/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__808 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_17/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__806 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__808 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_17/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__806 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__808:0 for ONNX tensor: QuantLinearNode__808:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__808 [QuantizeLinear] outputs: [QuantLinearNode__808:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__809 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__808:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__806 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__809 [DequantizeLinear] inputs: [QuantLinearNode__808:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__806 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__809 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd/ReadVariableOp__209 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd/ReadVariableOp__209 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp__210 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp_1__211 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp__212 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp_1__213 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp__210 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp_1__211 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp__212 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp_1__213 -> (16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__796 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_16/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__794 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__796 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_16/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__794 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__796:0 for ONNX tensor: QuantLinearNode__796:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__796 [QuantizeLinear] outputs: [QuantLinearNode__796:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__797 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__796:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__794 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__797 [DequantizeLinear] inputs: [QuantLinearNode__796:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__794 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_8/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_8/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__797 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_8/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_8/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_8/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_8/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_8/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_8/add for ONNX node: StatefulPartitionedCall/model/add_8/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_8/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_8/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_8/add [Add] outputs: [StatefulPartitionedCall/model/add_8/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_18/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_8/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_18/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_8/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_18/Relu for ONNX node: StatefulPartitionedCall/model/activation_18/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_18/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_18/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_18/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_18/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__828 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_18/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__826 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__828 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_18/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__826 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__828:0 for ONNX tensor: QuantLinearNode__828:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__828 [QuantizeLinear] outputs: [QuantLinearNode__828:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: DequantLinearNode__829 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__828:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__826 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__829 [DequantizeLinear] inputs: [QuantLinearNode__828:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__826 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__829 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd/ReadVariableOp__227 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize_1:0 -> (32, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd/ReadVariableOp__227 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp__228 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp_1__229 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp__230 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp_1__231 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp__228 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp_1__229 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp__230 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp_1__231 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_19/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_19/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_19/Relu for ONNX node: StatefulPartitionedCall/model/activation_19/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_19/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_19/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_19/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_19/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
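
The parser is walking the same pattern for every residual block here: a QuantizeLinear/DequantizeLinear pair on the activation, then Conv, BatchNormalization and Relu, with a scalar FLOAT scale (quant_scale__*) and a shared scalar INT8 zero point (zero_point__1163). As a reminder of what each of those Q/DQ pairs computes, here is a minimal NumPy sketch of the per-tensor fake-quantization round trip following the ONNX QuantizeLinear/DequantizeLinear definitions; the tensor contents and the 0.05 scale are made up for illustration.

    import numpy as np

    def quantize_linear(x, scale, zero_point):
        # ONNX QuantizeLinear: saturate(round(x / scale) + zero_point) to int8,
        # with round-half-to-even (which np.round also uses).
        q = np.round(x / scale) + zero_point
        return np.clip(q, -128, 127).astype(np.int8)

    def dequantize_linear(x_q, scale, zero_point):
        # ONNX DequantizeLinear: (x_q - zero_point) * scale
        return (x_q.astype(np.float32) - zero_point) * scale

    # Made-up activation shaped like the (-1, 16, 32, 32) tensors above,
    # with a per-tensor scale and an int8 zero point of 0.
    x = np.random.randn(1, 16, 32, 32).astype(np.float32)
    scale, zero_point = np.float32(0.05), np.int8(0)

    x_qdq = dequantize_linear(quantize_linear(x, scale, zero_point), scale, zero_point)
    print(np.abs(x - x_qdq).max())  # at most scale / 2 for values inside the int8 range
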
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__836 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_19/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__834 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__836 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_19/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__834 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__836:0 for ONNX tensor: QuantLinearNode__836:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__836 [QuantizeLinear] outputs: [QuantLinearNode__836:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__837 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__836:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__834 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__837 [DequantizeLinear] inputs: [QuantLinearNode__836:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__834 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__837 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd/ReadVariableOp__237 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd/ReadVariableOp__237 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp__238 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp_1__239 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp__240 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp_1__241 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp__238 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp_1__239 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp__240 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp_1__241 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__816 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_18/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__826 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__816 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_18/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__826 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__816:0 for ONNX tensor: QuantLinearNode__816:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__816 [QuantizeLinear] outputs: [QuantLinearNode__816:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__817 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__816:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__826 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__817 [DequantizeLinear] inputs: [QuantLinearNode__816:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__826 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__817 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd/ReadVariableOp__219 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize_1:0 -> (32, 16, 1, 1)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd/ReadVariableOp__219 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__824 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__822 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__824 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__822 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__824:0 for ONNX tensor: QuantLinearNode__824:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__824 [QuantizeLinear] outputs: [QuantLinearNode__824:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__825 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__824:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__822 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__825 [DequantizeLinear] inputs: [QuantLinearNode__824:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__822 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_9/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_9/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__825 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_9/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_9/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_9/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_9/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_9/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_9/add for ONNX node: StatefulPartitionedCall/model/add_9/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_9/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_9/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_9/add [Add] outputs: [StatefulPartitionedCall/model/add_9/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_20/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_9/add:0 
[10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_20/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_9/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_20/Relu for ONNX node: StatefulPartitionedCall/model/activation_20/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_20/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_20/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_20/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_20/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__848 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_20/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__846 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__848 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_20/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__846 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__848:0 for ONNX tensor: QuantLinearNode__848:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__848 [QuantizeLinear] outputs: [QuantLinearNode__848:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__849 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__848:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__846 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__849 [DequantizeLinear] inputs: [QuantLinearNode__848:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__846 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__849 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd/ReadVariableOp__249 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd/ReadVariableOp__249 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp__250 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp_1__251 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp__252 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp_1__253 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp__250 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp_1__251 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp__252 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp_1__253 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_21/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_21/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_21/Relu for ONNX node: StatefulPartitionedCall/model/activation_21/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_21/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_21/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_21/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_21/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
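
The log keeps repeating "Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call." for these convolutions because their kernels are not baked-in weights: each kernel arrives through the node's second input, out of its own weight-side DequantizeLinear (the quantize_and_dequantize_1:0 tensors above). The sketch below shows roughly how an equivalent structure is assembled by hand with the TensorRT 8 Python API; it is an illustration of the pattern, not the parser's actual code, and the 0.05/0.01 scales, zero-filled weights and layer wiring are all placeholders.

    import numpy as np
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

    x = network.add_input("input", trt.float32, (-1, 16, 32, 32))

    # Scalar per-tensor scales, mirroring the ()[FLOAT] scale constants in the log.
    act_scale = network.add_constant((), trt.Weights(np.array([0.05], dtype=np.float32)))
    w_scale = network.add_constant((), trt.Weights(np.array([0.01], dtype=np.float32)))
    weights = network.add_constant((32, 16, 3, 3),
                                   trt.Weights(np.zeros((32, 16, 3, 3), dtype=np.float32)))

    # Activation Q/DQ pair (the QuantLinearNode__*/DequantLinearNode__* pairs above).
    q_act = network.add_quantize(x, act_scale.get_output(0))
    dq_act = network.add_dequantize(q_act.get_output(0), act_scale.get_output(0))

    # Weight Q/DQ pair feeding the convolution kernel.
    q_w = network.add_quantize(weights.get_output(0), w_scale.get_output(0))
    dq_w = network.add_dequantize(q_w.get_output(0), w_scale.get_output(0))

    # Convolution created with empty kernel weights; the kernel is wired in through
    # input 1, which is what "setInput(1, kernel_tensor)" refers to.
    conv = network.add_convolution_nd(dq_act.get_output(0), 32, (3, 3),
                                      trt.Weights(), trt.Weights())
    conv.set_input(1, dq_w.get_output(0))
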
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__856 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_21/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__854 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__856 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_21/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__854 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__856:0 for ONNX tensor: QuantLinearNode__856:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__856 [QuantizeLinear] outputs: [QuantLinearNode__856:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__857 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__856:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__854 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__857 [DequantizeLinear] inputs: [QuantLinearNode__856:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__854 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__857 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd/ReadVariableOp__259 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd/ReadVariableOp__259 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp__260 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp_1__261 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp__262 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp_1__263 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp__260 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp_1__261 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp__262 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp_1__263 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__844 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_20/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__846 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__844 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_20/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__846 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__844:0 for ONNX tensor: QuantLinearNode__844:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__844 [QuantizeLinear] outputs: [QuantLinearNode__844:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__845 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__844:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__846 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__845 [DequantizeLinear] inputs: [QuantLinearNode__844:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__846 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_10/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_10/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__845 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_10/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_10/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_10/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_10/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_10/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_10/add for ONNX node: StatefulPartitionedCall/model/add_10/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_10/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_10/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_10/add [Add] outputs: [StatefulPartitionedCall/model/add_10/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_22/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_10/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_22/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_10/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_22/Relu for ONNX node: StatefulPartitionedCall/model/activation_22/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_22/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_22/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_22/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_22/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__868 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_22/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__866 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__868 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_22/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__866 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__868:0 for ONNX tensor: QuantLinearNode__868:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__868 [QuantizeLinear] outputs: [QuantLinearNode__868:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__869 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__868:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__866 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__869 [DequantizeLinear] inputs: [QuantLinearNode__868:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__866 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__869 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd/ReadVariableOp__271 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd/ReadVariableOp__271 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp__272 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp_1__273 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp__274 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp_1__275 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp__272 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp_1__273 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp__274 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp_1__275 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_23/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_23/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_23/Relu for ONNX node: StatefulPartitionedCall/model/activation_23/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_23/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_23/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_23/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_23/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
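
Each QuantLinearNode__N / DequantLinearNode__N+1 pair being parsed here is just a pair of plain ONNX nodes that the exporter generated for the quantize_and_dequantize ops visible in the tensor names, sharing one scalar scale initializer and the common zero_point__1163 initializer. For reference, a pair like that can be reconstructed with onnx.helper as follows; the names, the 0.05 scale and the N x 32 x 16 x 16 shape are placeholders, not values taken from resnet.onnx.

    import numpy as np
    import onnx
    from onnx import TensorProto, helper, numpy_helper

    # Scalar FLOAT scale and scalar INT8 zero point, matching the ()[FLOAT] and
    # ()[INT8] inputs printed in the log.
    scale = numpy_helper.from_array(np.array(0.05, dtype=np.float32), name="quant_scale__x")
    zero_point = numpy_helper.from_array(np.array(0, dtype=np.int8), name="zero_point__x")

    q = helper.make_node("QuantizeLinear",
                         ["relu_out", "quant_scale__x", "zero_point__x"], ["q_out"],
                         name="QuantLinearNode__x")
    dq = helper.make_node("DequantizeLinear",
                          ["q_out", "quant_scale__x", "zero_point__x"], ["dq_out"],
                          name="DequantLinearNode__x")

    graph = helper.make_graph(
        [q, dq], "qdq_pair",
        [helper.make_tensor_value_info("relu_out", TensorProto.FLOAT, ["N", 32, 16, 16])],
        [helper.make_tensor_value_info("dq_out", TensorProto.FLOAT, ["N", 32, 16, 16])],
        initializer=[scale, zero_point])
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
    onnx.checker.check_model(model)
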
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__876 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_23/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__874 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__876 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_23/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__874 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__876:0 for ONNX tensor: QuantLinearNode__876:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__876 [QuantizeLinear] outputs: [QuantLinearNode__876:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__877 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__876:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__874 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__877 [DequantizeLinear] inputs: [QuantLinearNode__876:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__874 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__877 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd/ReadVariableOp__281 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd/ReadVariableOp__281 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp__282 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp_1__283 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp__284 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp_1__285 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp__282 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp_1__283 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp__284 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp_1__285 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__864 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_22/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__866 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__864 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_22/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__866 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__864:0 for ONNX tensor: QuantLinearNode__864:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__864 [QuantizeLinear] outputs: [QuantLinearNode__864:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__865 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__864:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__866 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__865 [DequantizeLinear] inputs: [QuantLinearNode__864:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__866 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_11/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_11/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__865 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_11/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_11/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_11/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_11/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_11/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_11/add for ONNX node: StatefulPartitionedCall/model/add_11/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_11/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_11/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_11/add [Add] outputs: [StatefulPartitionedCall/model/add_11/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_24/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_11/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_24/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_11/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_24/Relu for ONNX node: StatefulPartitionedCall/model/activation_24/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_24/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_24/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_24/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_24/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__888 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_24/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__886 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__888 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_24/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__886 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__888:0 for ONNX tensor: QuantLinearNode__888:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__888 [QuantizeLinear] outputs: [QuantLinearNode__888:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__889 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__888:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__886 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__889 [DequantizeLinear] inputs: [QuantLinearNode__888:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__886 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__889 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd/ReadVariableOp__293 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd/ReadVariableOp__293 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp__294 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp_1__295 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp__296 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp_1__297 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp__294 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp_1__295 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp__296 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp_1__297 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_25/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_25/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_25/Relu for ONNX node: StatefulPartitionedCall/model/activation_25/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_25/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_25/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_25/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_25/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
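
One detail worth noticing in this stretch of the log: when an activation has two consumers it is quantized twice, once for the convolution branch and once for the shortcut into the residual add, and both QuantizeLinear nodes read the same scale (for example QuantLinearNode__888 and QuantLinearNode__884 both use quant_scale__886), so the two paths see identical quantization. To sanity-check how many of these Q/DQ pairs the model contains before building, a quick pass over the graph with the onnx package is enough; resnet.onnx below is assumed to be the same file handed to trtexec.

    from collections import Counter
    import onnx

    model = onnx.load("resnet.onnx")
    ops = Counter(node.op_type for node in model.graph.node)

    # One QuantizeLinear per DequantizeLinear is expected, i.e. one pair per Q/DQ site.
    print("QuantizeLinear:", ops["QuantizeLinear"], "DequantizeLinear:", ops["DequantizeLinear"])
    print({op: n for op, n in ops.items() if op in ("Conv", "Add", "Relu", "BatchNormalization")})
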
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__896 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_25/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__894 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__896 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_25/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__894 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__896:0 for ONNX tensor: QuantLinearNode__896:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__896 [QuantizeLinear] outputs: [QuantLinearNode__896:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__897 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__896:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__894 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__897 [DequantizeLinear] inputs: [QuantLinearNode__896:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__894 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__897 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd/ReadVariableOp__303 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd/ReadVariableOp__303 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp__304 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp_1__305 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp__306 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp_1__307 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp__304 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp_1__305 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp__306 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp_1__307 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__884 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_24/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__886 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__884 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_24/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__886 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__884:0 for ONNX tensor: QuantLinearNode__884:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__884 [QuantizeLinear] outputs: [QuantLinearNode__884:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__885 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__884:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__886 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__885 [DequantizeLinear] inputs: [QuantLinearNode__884:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__886 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_12/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_12/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__885 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_12/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_12/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_12/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_12/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_12/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_12/add for ONNX node: StatefulPartitionedCall/model/add_12/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_12/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_12/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_12/add [Add] outputs: [StatefulPartitionedCall/model/add_12/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_26/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_12/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_26/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_12/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_26/Relu for ONNX node: StatefulPartitionedCall/model/activation_26/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_26/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_26/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_26/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_26/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__908 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_26/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__902 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__908 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_26/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__902 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__908:0 for ONNX tensor: QuantLinearNode__908:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__908 [QuantizeLinear] outputs: [QuantLinearNode__908:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__909 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__908:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__902 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__909 [DequantizeLinear] inputs: [QuantLinearNode__908:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__902 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__909 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd/ReadVariableOp__315 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd/ReadVariableOp__315 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
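The "Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call." message repeated for every quant_conv2d_* node is expected for an explicitly quantized network: the convolution kernel is not a constant baked into the layer but the output of a weight-side DequantizeLinear (the quantize_and_dequantize_1 input), so the parser attaches it afterwards as the layer's second input. Roughly, the equivalent construction with the TensorRT Python API looks like the sketch below; all names, shapes, and scale values are assumptions, not taken from the model.

```python
import numpy as np
import tensorrt as trt

logger  = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

x = network.add_input("activation", trt.float32, (-1, 32, 16, 16))

# Keep numpy buffers alive for as long as the network is being built.
act_scale_np = np.array([0.05], dtype=np.float32)   # assumed per-tensor activation scale
w_np         = np.random.randn(32, 32, 3, 3).astype(np.float32)
w_scale_np   = np.array([0.01], dtype=np.float32)   # assumed per-tensor weight scale
bias_np      = np.zeros(32, dtype=np.float32)

# Q/DQ pair on the activation (scalar scale, implicit int8 zero point of 0).
act_scale = network.add_constant((), trt.Weights(act_scale_np)).get_output(0)
q_act  = network.add_quantize(x, act_scale)
dq_act = network.add_dequantize(q_act.get_output(0), act_scale)

# Q/DQ pair on the float kernel constant (the quantize_and_dequantize_1 path in the log).
w       = network.add_constant((32, 32, 3, 3), trt.Weights(w_np)).get_output(0)
w_scale = network.add_constant((), trt.Weights(w_scale_np)).get_output(0)
q_w  = network.add_quantize(w, w_scale)
dq_w = network.add_dequantize(q_w.get_output(0), w_scale)

# Convolution built with empty kernel weights; the dequantized kernel tensor is then
# attached through input index 1 -- the setInput(1, kernel_tensor) call the log refers to.
conv = network.add_convolution_nd(dq_act.get_output(0), 32, (3, 3), trt.Weights(), trt.Weights(bias_np))
conv.set_input(1, dq_w.get_output(0))
conv.padding_nd = (1, 1)
```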
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp__316 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp_1__317 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp__318 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp_1__319 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp__316 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp_1__317 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp__318 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp_1__319 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_27/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_27/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_27/Relu for ONNX node: StatefulPartitionedCall/model/activation_27/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_27/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_27/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_27/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_27/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
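Each FusedBatchNormV3 node is imported as an ONNX BatchNormalization with four per-channel (32-element) parameters read from ReadVariableOp constants: scale, offset, moving mean, and moving variance. At inference time this is just a per-channel affine transform; a minimal numpy version is below (the epsilon is an assumption, since it is an attribute of each ONNX node; TensorFlow's default is 1e-3).

```python
import numpy as np

def batch_norm_inference(x, gamma, beta, mean, var, eps=1e-3):
    # Inference-mode BatchNormalization over the channel axis of an NCHW tensor.
    # x: (N, 32, 16, 16); gamma/beta/mean/var: (32,) -- matching the shapes in the log.
    shape = (1, x.shape[1], 1, 1)                    # broadcast per-channel params over N, H, W
    x_hat = (x - mean.reshape(shape)) / np.sqrt(var.reshape(shape) + eps)
    return gamma.reshape(shape) * x_hat + beta.reshape(shape)

x = np.random.randn(2, 32, 16, 16).astype(np.float32)
gamma, beta = np.ones(32, np.float32), np.zeros(32, np.float32)
mean, var   = np.zeros(32, np.float32), np.ones(32, np.float32)
y = batch_norm_inference(x, gamma, beta, mean, var)  # same shape as x
```

During optimization TensorRT normally folds this affine step into the preceding convolution, which is why the batch-norm layers rarely show up as separate kernels in the final engine.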
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__916 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_27/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__914 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__916 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_27/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__914 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__916:0 for ONNX tensor: QuantLinearNode__916:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__916 [QuantizeLinear] outputs: [QuantLinearNode__916:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__917 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__916:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__914 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__917 [DequantizeLinear] inputs: [QuantLinearNode__916:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__914 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__917 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd/ReadVariableOp__325 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd/ReadVariableOp__325 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp__326 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp_1__327 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp__328 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp_1__329 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp__326 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp_1__327 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp__328 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp_1__329 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__904 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_26/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__902 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__904 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_26/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__902 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__904:0 for ONNX tensor: QuantLinearNode__904:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__904 [QuantizeLinear] outputs: [QuantLinearNode__904:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__905 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__904:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__902 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__905 [DequantizeLinear] inputs: [QuantLinearNode__904:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__902 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_13/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_13/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__905 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_13/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_13/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_13/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_13/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_13/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_13/add for ONNX node: StatefulPartitionedCall/model/add_13/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_13/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_13/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_13/add [Add] outputs: [StatefulPartitionedCall/model/add_13/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_28/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_13/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_28/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_13/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_28/Relu for ONNX node: StatefulPartitionedCall/model/activation_28/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_28/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_28/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_28/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_28/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__928 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_28/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__926 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__928 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_28/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__926 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__928:0 for ONNX tensor: QuantLinearNode__928:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__928 [QuantizeLinear] outputs: [QuantLinearNode__928:0 -> (-1, 32, 16, 16)[FLOAT]], 
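The add_12/add and add_13/add nodes expose the residual structure of these blocks: the skip connection does not come straight from the earlier ReLU but passes through its own Q/DQ pair (quant_identity_12, quant_identity_13, ...) before being summed with the batch-norm output of the convolution path, and the sum feeds the next ReLU. Purely as a reading aid, the dataflow of one such block is sketched below with numpy (the fake-quantization helper and the scale value are illustrative).

```python
import numpy as np

def fake_quant(x, scale, zero_point=0):
    # One QuantizeLinear -> DequantizeLinear round trip (per-tensor, int8).
    q = np.clip(np.round(x / scale) + zero_point, -128, 127)
    return ((q - zero_point) * scale).astype(np.float32)

def residual_block_tail(prev_relu, conv_bn_out, skip_scale):
    # prev_relu:   output of the earlier activation_*/Relu            (N, 32, 16, 16)
    # conv_bn_out: output of the conv path's FusedBatchNormV3 node    (N, 32, 16, 16)
    # skip_scale:  scale of the quant_identity_* Q/DQ pair on the skip connection
    skip = fake_quant(prev_relu, skip_scale)      # quant_identity_*/quantize_and_dequantize
    return np.maximum(skip + conv_bn_out, 0.0)    # add_*/add followed by the next Relu

prev_relu   = np.abs(np.random.randn(1, 32, 16, 16)).astype(np.float32)
conv_bn_out = np.random.randn(1, 32, 16, 16).astype(np.float32)
out = residual_block_tail(prev_relu, conv_bn_out, skip_scale=0.05)
```

Placing a Q/DQ pair on the skip input is the placement NVIDIA recommends for quantized residual connections; it is generally what allows the element-wise add to run in INT8 and be fused with the preceding convolution rather than falling back to higher precision.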
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__929 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__928:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__926 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__929 [DequantizeLinear] inputs: [QuantLinearNode__928:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__926 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__929 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd/ReadVariableOp__337 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd/ReadVariableOp__337 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp__338 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp_1__339 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp__340 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp_1__341 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp__338 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp_1__339 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp__340 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp_1__341 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_29/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_29/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_29/Relu for ONNX node: StatefulPartitionedCall/model/activation_29/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_29/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_29/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_29/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_29/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__936 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_29/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__934 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__936 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_29/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__934 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__936:0 for ONNX tensor: QuantLinearNode__936:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__936 [QuantizeLinear] outputs: [QuantLinearNode__936:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__937 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__936:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__934 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__937 [DequantizeLinear] inputs: [QuantLinearNode__936:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__934 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__937 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd/ReadVariableOp__347 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd/ReadVariableOp__347 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp__348 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp_1__349 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp__350 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp_1__351 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp__348 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp_1__349 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp__350 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp_1__351 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__924 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_28/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__926 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__924 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_28/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__926 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__924:0 for ONNX tensor: QuantLinearNode__924:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__924 [QuantizeLinear] outputs: [QuantLinearNode__924:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__925 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__924:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__926 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__925 [DequantizeLinear] inputs: [QuantLinearNode__924:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__926 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_14/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_14/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__925 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_14/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_14/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_14/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_14/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_14/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_14/add for ONNX node: StatefulPartitionedCall/model/add_14/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_14/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_14/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_14/add [Add] outputs: [StatefulPartitionedCall/model/add_14/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_30/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_14/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_30/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_14/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_30/Relu for ONNX node: StatefulPartitionedCall/model/activation_30/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_30/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_30/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_30/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_30/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__948 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_30/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__942 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__948 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_30/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__942 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__948:0 for ONNX tensor: QuantLinearNode__948:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__948 [QuantizeLinear] outputs: [QuantLinearNode__948:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__949 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__948:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__942 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__949 [DequantizeLinear] inputs: [QuantLinearNode__948:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__942 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__949 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd/ReadVariableOp__359 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd/ReadVariableOp__359 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp__360 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp_1__361 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp__362 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp_1__363 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp__360 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp_1__361 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp__362 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp_1__363 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_31/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_31/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_31/Relu for ONNX node: StatefulPartitionedCall/model/activation_31/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_31/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_31/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_31/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_31/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__956 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_31/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__954 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__956 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_31/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__954 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__956:0 for ONNX tensor: QuantLinearNode__956:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__956 [QuantizeLinear] outputs: [QuantLinearNode__956:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__957 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__956:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__954 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__957 [DequantizeLinear] inputs: [QuantLinearNode__956:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__954 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__957 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd/ReadVariableOp__369 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd/ReadVariableOp__369 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp__370 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp_1__371 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp__372 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp_1__373 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp__370 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp_1__371 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp__372 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp_1__373 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__944 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_30/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__942 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__944 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_30/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__942 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__944:0 for ONNX tensor: QuantLinearNode__944:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__944 [QuantizeLinear] outputs: [QuantLinearNode__944:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__945 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__944:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__942 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__945 [DequantizeLinear] inputs: [QuantLinearNode__944:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__942 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_15/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_15/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__945 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_15/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_15/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_15/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_15/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_15/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_15/add for ONNX node: StatefulPartitionedCall/model/add_15/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_15/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_15/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_15/add [Add] outputs: [StatefulPartitionedCall/model/add_15/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_32/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_15/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_32/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_15/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_32/Relu for ONNX node: StatefulPartitionedCall/model/activation_32/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_32/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_32/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_32/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_32/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__968 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_32/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__966 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__968 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_32/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__966 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__968:0 for ONNX tensor: QuantLinearNode__968:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__968 [QuantizeLinear] outputs: [QuantLinearNode__968:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__969 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__968:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__966 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__969 [DequantizeLinear] inputs: [QuantLinearNode__968:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__966 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__969 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd/ReadVariableOp__381 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd/ReadVariableOp__381 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp__382 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp_1__383 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp__384 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp_1__385 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp__382 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp_1__383 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp__384 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp_1__385 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_33/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_33/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_33/Relu for ONNX node: StatefulPartitionedCall/model/activation_33/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_33/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_33/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_33/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_33/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__976 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_33/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__974 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__976 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_33/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__974 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__976:0 for ONNX tensor: QuantLinearNode__976:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__976 [QuantizeLinear] outputs: [QuantLinearNode__976:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__977 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__976:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__974 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__977 [DequantizeLinear] inputs: [QuantLinearNode__976:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__974 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__977 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd/ReadVariableOp__391 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd/ReadVariableOp__391 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp__392 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp_1__393 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp__394 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp_1__395 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp__392 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp_1__393 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp__394 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp_1__395 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__964 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_32/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__966 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__964 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_32/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__966 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__964:0 for ONNX tensor: QuantLinearNode__964:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__964 [QuantizeLinear] outputs: [QuantLinearNode__964:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__965 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__964:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__966 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__965 [DequantizeLinear] inputs: [QuantLinearNode__964:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__966 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_16/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_16/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__965 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_16/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_16/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_16/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_16/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_16/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_16/add for ONNX node: StatefulPartitionedCall/model/add_16/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_16/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_16/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_16/add [Add] outputs: [StatefulPartitionedCall/model/add_16/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_34/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_16/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_34/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_16/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_34/Relu for ONNX node: StatefulPartitionedCall/model/activation_34/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_34/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_34/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_34/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_34/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__988 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_34/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__986 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__988 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_34/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__986 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__988:0 for ONNX tensor: QuantLinearNode__988:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__988 [QuantizeLinear] outputs: [QuantLinearNode__988:0 -> (-1, 32, 16, 16)[FLOAT]], 
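The add_16/add entry above is the residual merge: one branch is the identity tensor passed through quant_identity_16's Q/DQ pair, the other is conv -> batch norm. Given the tf2onnx producer, the block being parsed roughly corresponds to the Keras sketch below; filter counts and wiring are inferred from the logged names and shapes, not read from the original model source:

```python
# Rough Keras equivalent of the block being parsed (before QAT wrappers were added):
# conv -> BN -> ReLU -> conv -> BN, summed with the identity branch, then ReLU.
# This is an inference from the node names and shapes in the log, not the actual source.
from tensorflow import keras

def basic_block(x, filters=32):
    shortcut = x                                          # becomes quant_identity_16 after wrapping
    y = keras.layers.Conv2D(filters, 3, padding="same")(x)
    y = keras.layers.BatchNormalization()(y)              # FusedBatchNormV3 in the ONNX graph
    y = keras.layers.Activation("relu")(y)
    y = keras.layers.Conv2D(filters, 3, padding="same")(y)
    y = keras.layers.BatchNormalization()(y)
    y = keras.layers.Add()([shortcut, y])                  # add_16/add above
    return keras.layers.Activation("relu")(y)
```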
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__989 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__988:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__986 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__989 [DequantizeLinear] inputs: [QuantLinearNode__988:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__986 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__989 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd/ReadVariableOp__403 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd/ReadVariableOp__403 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp__404 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp_1__405 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp__406 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp_1__407 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp__404 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp_1__405 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp__406 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp_1__407 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_35/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_35/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_35/Relu for ONNX node: StatefulPartitionedCall/model/activation_35/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_35/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_35/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_35/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_35/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__996 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_35/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__994 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__996 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_35/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__994 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__996:0 for ONNX tensor: QuantLinearNode__996:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__996 [QuantizeLinear] outputs: [QuantLinearNode__996:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__997 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__996:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__994 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__997 [DequantizeLinear] inputs: [QuantLinearNode__996:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__994 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__997 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__984 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_34/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__986 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__984 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_34/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__986 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__984:0 for ONNX tensor: QuantLinearNode__984:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__984 [QuantizeLinear] outputs: [QuantLinearNode__984:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__985 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__984:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__986 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__985 [DequantizeLinear] inputs: [QuantLinearNode__984:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__986 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_17/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_17/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__985 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_17/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1189 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__1188:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1186 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1189 [DequantizeLinear] inputs: [QuantLinearNode__1188:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1186 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1189 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1181 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1180:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1178 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1181 [DequantizeLinear] inputs: [QuantLinearNode__1180:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1178 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1181 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1169 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1168:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1166 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1169 [DequantizeLinear] inputs: [QuantLinearNode__1168:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1166 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1169 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1161 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1160:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1158 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1161 [DequantizeLinear] inputs: [QuantLinearNode__1160:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1158 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1161 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] 
Parsing node: DequantLinearNode__1149 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1148:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1146 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1149 [DequantizeLinear] inputs: [QuantLinearNode__1148:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1146 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1149 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1141 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1140:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1138 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1141 [DequantizeLinear] inputs: [QuantLinearNode__1140:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1138 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1141 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1129 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1128:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1126 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1129 [DequantizeLinear] inputs: [QuantLinearNode__1128:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1126 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1129 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1121 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1120:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1118 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1121 [DequantizeLinear] inputs: [QuantLinearNode__1120:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1118 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1121 [DequantizeLinear] outputs: 
[StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1109 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1108:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1106 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1109 [DequantizeLinear] inputs: [QuantLinearNode__1108:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1106 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1109 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1101 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1100:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1098 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1101 [DequantizeLinear] inputs: [QuantLinearNode__1100:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1098 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1101 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1089 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1088:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1086 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1089 [DequantizeLinear] inputs: [QuantLinearNode__1088:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1086 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1089 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1081 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1080:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1078 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1081 [DequantizeLinear] inputs: [QuantLinearNode__1080:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1078 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize_1:0 for ONNX tensor: 
StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1081 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1069 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1068:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1066 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1069 [DequantizeLinear] inputs: [QuantLinearNode__1068:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1066 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1069 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1061 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1060:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1058 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1061 [DequantizeLinear] inputs: [QuantLinearNode__1060:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1058 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1061 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1049 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1048:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1046 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1049 [DequantizeLinear] inputs: [QuantLinearNode__1048:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1046 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1049 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1041 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1040:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1038 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1041 [DequantizeLinear] inputs: [QuantLinearNode__1040:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1038 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1041 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1029 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1028:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1026 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1029 [DequantizeLinear] inputs: [QuantLinearNode__1028:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1026 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1029 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1021 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1020:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1018 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1021 [DequantizeLinear] inputs: [QuantLinearNode__1020:0 -> (64, 32, 3, 3)[FLOAT]], [quant_scale__1018 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1021 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize_1:0 -> (64, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1009 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1008:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1006 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1009 [DequantizeLinear] inputs: [QuantLinearNode__1008:0 -> (64, 32, 1, 1)[FLOAT]], [quant_scale__1006 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1009 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize_1:0 -> (64, 32, 1, 1)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1001 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1000:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__998 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1001 [DequantizeLinear] inputs: [QuantLinearNode__1000:0 -> (32, 32, 3, 
3)[FLOAT]], [quant_scale__998 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1001 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd/ReadVariableOp__413 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd/ReadVariableOp__413 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp__414 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp_1__415 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp__416 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp_1__417 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp__414 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp_1__415 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp__416 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp_1__417 -> (32)[FLOAT]], [10/04/2021-21:26:20] [V] 
[TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_17/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_17/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_17/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_17/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_17/add for ONNX node: StatefulPartitionedCall/model/add_17/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_17/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_17/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_17/add [Add] outputs: [StatefulPartitionedCall/model/add_17/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_36/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_17/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_36/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_17/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_36/Relu for ONNX node: StatefulPartitionedCall/model/activation_36/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_36/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_36/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_36/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_36/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1016 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_36/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1002 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1016 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_36/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__1002 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1016:0 for ONNX tensor: QuantLinearNode__1016:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1016 [QuantizeLinear] outputs: [QuantLinearNode__1016:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1017 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for 
input: QuantLinearNode__1016:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1002 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1017 [DequantizeLinear] inputs: [QuantLinearNode__1016:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__1002 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1017 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd/ReadVariableOp__431 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize_1:0 -> (64, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd/ReadVariableOp__431 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
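Unlike the activation Q/DQ pairs, the weight DequantizeLinear nodes a little further up (DequantLinearNode__1189 down to __1001) carry a scale and zero point of shape (64) or (32), i.e. one value per output channel. A sketch of that per-channel dequantization; only the shapes come from the log, the values are invented:

```python
# Per-channel weight dequantization as in the weight DequantizeLinear nodes above:
# scale/zero_point have shape (out_channels,), broadcast over axis 0 of an OIHW kernel.
import numpy as np

q_w = np.random.randint(-127, 128, size=(64, 64, 3, 3), dtype=np.int8)  # int8 kernel
scale = np.full((64,), 0.01, dtype=np.float32)       # like quant_scale__1186 -> (64)[FLOAT]
zero_point = np.zeros((64,), dtype=np.int8)          # like zero_point__1107 -> (64)[INT8]

w = (q_w.astype(np.float32) - zero_point.reshape(-1, 1, 1, 1)) * scale.reshape(-1, 1, 1, 1)
```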
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp__432 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp_1__433 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp__434 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp_1__435 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp__432 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp_1__433 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp__434 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp_1__435 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_37/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_37/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_37/Relu for ONNX node: StatefulPartitionedCall/model/activation_37/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_37/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_37/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_37/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_37/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: QuantLinearNode__1024 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_37/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1022 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1024 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_37/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1022 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1024:0 for ONNX tensor: QuantLinearNode__1024:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1024 [QuantizeLinear] outputs: [QuantLinearNode__1024:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1025 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1024:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1022 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1025 [DequantizeLinear] inputs: [QuantLinearNode__1024:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1022 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1025 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd/ReadVariableOp__441 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd/ReadVariableOp__441 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
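The recurring verbose message "Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call." is expected for Q/DQ networks: the parser creates the convolution without constant weights and then attaches the dequantized weight tensor as its second input. A hedged TensorRT Python sketch of that wiring, using API names as I recall them for TensorRT 8.x; values are invented and only the shapes mirror quant_conv2d_39:

```python
# Hedged sketch of a conv whose kernel arrives through a Q/DQ chain and is wired in
# via set_input(1, ...), as the parser message above describes.
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

x = network.add_input("x", trt.float32, (-1, 64, 8, 8))

w_q = network.add_constant((64, 64, 3, 3),
                           np.random.randint(-127, 128, (64, 64, 3, 3)).astype(np.int8))
w_scale = network.add_constant((64,), np.full((64,), 0.01, dtype=np.float32))
w_dq = network.add_dequantize(w_q.get_output(0), w_scale.get_output(0))
w_dq.axis = 0  # per-output-channel scales

# Convolution declared with empty weights, then given the DQ output as its kernel tensor.
conv = network.add_convolution_nd(x, 64, (3, 3), trt.Weights(), trt.Weights())
conv.set_input(1, w_dq.get_output(0))  # the "setInput(1, kernel_tensor)" the log refers to
conv.padding_nd = (1, 1)
```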
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp__442 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp_1__443 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp__444 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp_1__445 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp__442 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp_1__443 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp__444 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp_1__445 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1004 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_36/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1002 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1004 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_36/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__1002 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1004:0 for ONNX tensor: QuantLinearNode__1004:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1004 [QuantizeLinear] outputs: [QuantLinearNode__1004:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1005 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__1004:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1002 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1005 [DequantizeLinear] inputs: [QuantLinearNode__1004:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__1002 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1005 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd/ReadVariableOp__423 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize_1:0 -> (64, 32, 1, 1)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd/ReadVariableOp__423 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
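quant_conv2d_40 above is the 1x1 projection on the shortcut branch (input (-1, 32, 16, 16), kernel (64, 32, 1, 1)), the counterpart of the 3x3 quant_conv2d_38 that already produced (-1, 64, 8, 8); for the two paths to line up at the residual add, both must be stride-2 convolutions. A quick check of that shape arithmetic (strides and padding are inferred from the shapes, the log does not print them):

```python
# Output-size check for the downsampling pair above; stride/padding values are inferred.
def conv_out(size, kernel, stride, pad):
    return (size + 2 * pad - kernel) // stride + 1

assert conv_out(16, kernel=3, stride=2, pad=1) == 8   # quant_conv2d_38, 3x3 main path
assert conv_out(16, kernel=1, stride=2, pad=0) == 8   # quant_conv2d_40, 1x1 projection
```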
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1012 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1010 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1012 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1010 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1012:0 for ONNX tensor: QuantLinearNode__1012:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1012 [QuantizeLinear] outputs: [QuantLinearNode__1012:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1013 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1012:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1010 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1013 [DequantizeLinear] inputs: [QuantLinearNode__1012:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1010 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_18/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_18/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1013 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_18/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_18/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_18/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_18/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_18/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_18/add for ONNX node: StatefulPartitionedCall/model/add_18/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_18/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_18/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_18/add [Add] outputs: [StatefulPartitionedCall/model/add_18/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_38/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model/add_18/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_38/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_18/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_38/Relu for ONNX node: StatefulPartitionedCall/model/activation_38/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_38/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_38/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_38/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_38/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1036 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_38/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1034 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1036 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_38/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1034 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1036:0 for ONNX tensor: QuantLinearNode__1036:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1036 [QuantizeLinear] outputs: [QuantLinearNode__1036:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1037 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1036:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1034 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1037 [DequantizeLinear] inputs: [QuantLinearNode__1036:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1034 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1037 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd/ReadVariableOp__453 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd/ReadVariableOp__453 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp__454 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp_1__455 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp__456 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp_1__457 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp__454 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp_1__455 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp__456 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp_1__457 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_39/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_39/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_39/Relu for ONNX node: StatefulPartitionedCall/model/activation_39/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_39/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_39/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_39/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_39/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: QuantLinearNode__1044 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_39/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1042 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1044 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_39/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1042 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1044:0 for ONNX tensor: QuantLinearNode__1044:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1044 [QuantizeLinear] outputs: [QuantLinearNode__1044:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1045 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1044:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1042 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1045 [DequantizeLinear] inputs: [QuantLinearNode__1044:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1042 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1045 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd/ReadVariableOp__463 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd/ReadVariableOp__463 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp__464 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp_1__465 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp__466 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp_1__467 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp__464 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp_1__465 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp__466 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp_1__467 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1032 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_38/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1034 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1032 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_38/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1034 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1032:0 for ONNX tensor: QuantLinearNode__1032:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1032 [QuantizeLinear] outputs: [QuantLinearNode__1032:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1033 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__1032:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1034 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1033 [DequantizeLinear] inputs: [QuantLinearNode__1032:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1034 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_19/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_19/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1033 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_19/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_19/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_19/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_19/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_19/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_19/add for ONNX node: StatefulPartitionedCall/model/add_19/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_19/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_19/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_19/add [Add] outputs: [StatefulPartitionedCall/model/add_19/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_40/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_19/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_40/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_19/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_40/Relu for ONNX node: StatefulPartitionedCall/model/activation_40/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_40/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_40/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_40/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_40/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1056 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_40/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1054 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1056 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_40/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1054 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1056:0 for ONNX tensor: QuantLinearNode__1056:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1056 [QuantizeLinear] outputs: [QuantLinearNode__1056:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1057 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1056:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1054 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1057 [DequantizeLinear] inputs: [QuantLinearNode__1056:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1054 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1057 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd/ReadVariableOp__475 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd/ReadVariableOp__475 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp__476 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp_1__477 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp__478 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp_1__479 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp__476 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp_1__477 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp__478 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp_1__479 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_41/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_41/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_41/Relu for ONNX node: StatefulPartitionedCall/model/activation_41/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_41/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_41/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_41/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_41/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: QuantLinearNode__1064 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_41/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1062 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1064 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_41/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1062 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1064:0 for ONNX tensor: QuantLinearNode__1064:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1064 [QuantizeLinear] outputs: [QuantLinearNode__1064:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1065 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1064:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1062 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1065 [DequantizeLinear] inputs: [QuantLinearNode__1064:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1062 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1065 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd/ReadVariableOp__485 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd/ReadVariableOp__485 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp__486 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp_1__487 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp__488 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp_1__489 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp__486 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp_1__487 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp__488 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp_1__489 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1052 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_40/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1054 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1052 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_40/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1054 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1052:0 for ONNX tensor: QuantLinearNode__1052:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1052 [QuantizeLinear] outputs: [QuantLinearNode__1052:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1053 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__1052:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1054 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1053 [DequantizeLinear] inputs: [QuantLinearNode__1052:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1054 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_20/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_20/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1053 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_20/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_20/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_20/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_20/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_20/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_20/add for ONNX node: StatefulPartitionedCall/model/add_20/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_20/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_20/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_20/add [Add] outputs: [StatefulPartitionedCall/model/add_20/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_42/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_20/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_42/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_20/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_42/Relu for ONNX node: StatefulPartitionedCall/model/activation_42/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_42/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_42/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_42/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_42/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1076 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_42/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1074 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1076 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_42/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1074 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1076:0 for ONNX tensor: QuantLinearNode__1076:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1076 [QuantizeLinear] outputs: [QuantLinearNode__1076:0 -> (-1, 64, 8, 8)[FLOAT]], 
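From this point the log repeats the same residual-block pattern: a QuantizeLinear/DequantizeLinear pair on the incoming activation, a Conv, a FusedBatchNormV3, a Relu, and a second Q/DQ pair on the skip branch feeding each Add, with only the layer indices advancing. The Q/DQ placement driving these records can be confirmed directly from the ONNX file; the small sketch below assumes the onnx Python package is installed and that resnet.onnx is the same file passed to trtexec.

# Sketch: count the node types behind the repeating pattern in this log.
from collections import Counter
import onnx

model = onnx.load("resnet.onnx")
ops = Counter(node.op_type for node in model.graph.node)
for op in ("QuantizeLinear", "DequantizeLinear", "Conv",
           "BatchNormalization", "Relu", "Add"):
    print(op, ops.get(op, 0))

# Each Q/DQ pair shares its scale and zero-point inputs, matching the
# quant_scale__NNNN / zero_point__1163 names that recur in the records above.
print(model.graph.node[0].input)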
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1077 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1076:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1074 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1077 [DequantizeLinear] inputs: [QuantLinearNode__1076:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1074 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1077 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd/ReadVariableOp__497 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd/ReadVariableOp__497 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp__498 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp_1__499 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp__500 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp_1__501 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp__498 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp_1__499 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp__500 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp_1__501 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_43/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_43/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_43/Relu for ONNX node: StatefulPartitionedCall/model/activation_43/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_43/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_43/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_43/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_43/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: QuantLinearNode__1084 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_43/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1082 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1084 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_43/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1082 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1084:0 for ONNX tensor: QuantLinearNode__1084:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1084 [QuantizeLinear] outputs: [QuantLinearNode__1084:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1085 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1084:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1082 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1085 [DequantizeLinear] inputs: [QuantLinearNode__1084:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1082 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1085 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd/ReadVariableOp__507 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd/ReadVariableOp__507 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp__508 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp_1__509 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp__510 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp_1__511 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp__508 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp_1__509 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp__510 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp_1__511 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1072 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_42/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1074 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1072 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_42/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1074 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1072:0 for ONNX tensor: QuantLinearNode__1072:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1072 [QuantizeLinear] outputs: [QuantLinearNode__1072:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1073 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__1072:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1074 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1073 [DequantizeLinear] inputs: [QuantLinearNode__1072:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1074 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_21/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_21/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1073 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_21/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_21/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_21/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_21/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_21/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_21/add for ONNX node: StatefulPartitionedCall/model/add_21/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_21/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_21/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_21/add [Add] outputs: [StatefulPartitionedCall/model/add_21/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_44/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_21/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_44/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_21/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_44/Relu for ONNX node: StatefulPartitionedCall/model/activation_44/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_44/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_44/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_44/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_44/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1096 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_44/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1094 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1096 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_44/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1094 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1096:0 for ONNX tensor: QuantLinearNode__1096:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1096 [QuantizeLinear] outputs: [QuantLinearNode__1096:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1097 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1096:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1094 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1097 [DequantizeLinear] inputs: [QuantLinearNode__1096:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1094 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1097 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd/ReadVariableOp__519 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd/ReadVariableOp__519 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp__520 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp_1__521 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp__522 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp_1__523 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp__520 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp_1__521 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp__522 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp_1__523 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_45/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_45/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_45/Relu for ONNX node: StatefulPartitionedCall/model/activation_45/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_45/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_45/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_45/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_45/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: QuantLinearNode__1104 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_45/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1102 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1104 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_45/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1102 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1104:0 for ONNX tensor: QuantLinearNode__1104:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1104 [QuantizeLinear] outputs: [QuantLinearNode__1104:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1105 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1104:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1102 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1105 [DequantizeLinear] inputs: [QuantLinearNode__1104:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1102 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1105 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd/ReadVariableOp__529 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd/ReadVariableOp__529 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp__530 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp_1__531 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp__532 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp_1__533 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp__530 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp_1__531 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp__532 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp_1__533 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1092 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_44/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1094 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1092 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_44/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1094 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1092:0 for ONNX tensor: QuantLinearNode__1092:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1092 [QuantizeLinear] outputs: [QuantLinearNode__1092:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1093 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__1092:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1094 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1093 [DequantizeLinear] inputs: [QuantLinearNode__1092:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1094 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_22/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_22/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1093 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_22/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_22/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_22/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_22/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_22/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_22/add for ONNX node: StatefulPartitionedCall/model/add_22/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_22/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_22/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_22/add [Add] outputs: [StatefulPartitionedCall/model/add_22/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_46/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_22/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_46/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_22/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_46/Relu for ONNX node: StatefulPartitionedCall/model/activation_46/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_46/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_46/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_46/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_46/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1116 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_46/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1114 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1116 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_46/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1114 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1116:0 for ONNX tensor: QuantLinearNode__1116:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1116 [QuantizeLinear] outputs: [QuantLinearNode__1116:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1117 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1116:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1114 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1117 [DequantizeLinear] inputs: [QuantLinearNode__1116:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1114 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1117 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd/ReadVariableOp__541 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd/ReadVariableOp__541 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp__542 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp_1__543 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp__544 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp_1__545 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp__542 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp_1__543 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp__544 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp_1__545 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_47/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_47/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_47/Relu for ONNX node: StatefulPartitionedCall/model/activation_47/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_47/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_47/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_47/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_47/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: QuantLinearNode__1124 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_47/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1122 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1124 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_47/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1122 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1124:0 for ONNX tensor: QuantLinearNode__1124:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1124 [QuantizeLinear] outputs: [QuantLinearNode__1124:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1125 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1124:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1122 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1125 [DequantizeLinear] inputs: [QuantLinearNode__1124:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1122 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1125 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd/ReadVariableOp__551 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd/ReadVariableOp__551 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp__552 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp_1__553 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp__554 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp_1__555 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp__552 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp_1__553 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp__554 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp_1__555 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1112 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_46/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1114 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1112 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_46/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1114 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1112:0 for ONNX tensor: QuantLinearNode__1112:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1112 [QuantizeLinear] outputs: [QuantLinearNode__1112:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1113 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: 
QuantLinearNode__1112:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1114 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1113 [DequantizeLinear] inputs: [QuantLinearNode__1112:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1114 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_23/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_23/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1113 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_23/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_23/add [Add] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_23/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_23/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_23/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_23/add for ONNX node: StatefulPartitionedCall/model/add_23/add [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_23/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_23/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/add_23/add [Add] outputs: [StatefulPartitionedCall/model/add_23/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_48/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_23/add:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_48/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_23/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_48/Relu for ONNX node: StatefulPartitionedCall/model/activation_48/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_48/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_48/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_48/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_48/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1136 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_48/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1130 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1136 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_48/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1130 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1136:0 for ONNX tensor: QuantLinearNode__1136:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1136 [QuantizeLinear] outputs: [QuantLinearNode__1136:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1137 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1136:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1130 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1137 [DequantizeLinear] inputs: [QuantLinearNode__1136:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1130 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1137 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd/ReadVariableOp__563 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd/ReadVariableOp__563 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp__564 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp_1__565 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp__566 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp_1__567 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp__564 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp_1__565 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp__566 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp_1__567 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_49/Relu [Relu] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_49/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_49/Relu for ONNX node: StatefulPartitionedCall/model/activation_49/Relu [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_49/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_49/Relu:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/activation_49/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_49/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] 
[V] [TRT] Parsing node: QuantLinearNode__1144 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_49/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1142 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1144 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_49/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1142 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: QuantLinearNode__1144:0 for ONNX tensor: QuantLinearNode__1144:0 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1144 [QuantizeLinear] outputs: [QuantLinearNode__1144:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: DequantLinearNode__1145 [DequantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: QuantLinearNode__1144:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1142 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1145 [DequantizeLinear] inputs: [QuantLinearNode__1144:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1142 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] DequantLinearNode__1145 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [Conv] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize_1:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd/ReadVariableOp__573 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd/ReadVariableOp__573 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:20] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp__574 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp_1__575 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp__576 [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp_1__577 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp__574 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp_1__575 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp__576 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp_1__577 -> (64)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 [10/04/2021-21:26:20] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3:0 [10/04/2021-21:26:20] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:20] [V] [TRT] Parsing node: QuantLinearNode__1132 [QuantizeLinear] [10/04/2021-21:26:20] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_48/Relu:0 [10/04/2021-21:26:20] [V] [TRT] Searching for input: quant_scale__1130 [10/04/2021-21:26:20] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:20] [V] [TRT] QuantLinearNode__1132 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_48/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1130 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: QuantLinearNode__1132:0 for ONNX tensor: QuantLinearNode__1132:0 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1132 [QuantizeLinear] outputs: [QuantLinearNode__1132:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: DequantLinearNode__1133 [DequantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: 
QuantLinearNode__1132:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1130 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1133 [DequantizeLinear] inputs: [QuantLinearNode__1132:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1130 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_24/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_24/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1133 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_24/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_24/add [Add] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_24/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/add_24/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_24/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_24/add for ONNX node: StatefulPartitionedCall/model/add_24/add [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_24/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_24/add:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/add_24/add [Add] outputs: [StatefulPartitionedCall/model/add_24/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_50/Relu [Relu] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_24/add:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/activation_50/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_24/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_50/Relu for ONNX node: StatefulPartitionedCall/model/activation_50/Relu [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_50/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_50/Relu:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/activation_50/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_50/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: QuantLinearNode__1156 [QuantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_50/Relu:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1154 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1156 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_50/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1154 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: QuantLinearNode__1156:0 for ONNX tensor: QuantLinearNode__1156:0 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1156 [QuantizeLinear] outputs: [QuantLinearNode__1156:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:26:21] [V] [TRT] Parsing node: DequantLinearNode__1157 [DequantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: QuantLinearNode__1156:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1154 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1157 [DequantizeLinear] inputs: [QuantLinearNode__1156:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1154 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1157 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [Conv] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize_1:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd/ReadVariableOp__585 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd/ReadVariableOp__585 -> (64)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:21] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp__586 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp_1__587 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp__588 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp_1__589 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp__586 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp_1__587 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp__588 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp_1__589 -> (64)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_51/Relu [Relu] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/activation_51/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_51/Relu for ONNX node: StatefulPartitionedCall/model/activation_51/Relu [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_51/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_51/Relu:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/activation_51/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_51/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] 
[V] [TRT] Parsing node: QuantLinearNode__1164 [QuantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_51/Relu:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1162 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1164 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_51/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1162 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: QuantLinearNode__1164:0 for ONNX tensor: QuantLinearNode__1164:0 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1164 [QuantizeLinear] outputs: [QuantLinearNode__1164:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: DequantLinearNode__1165 [DequantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: QuantLinearNode__1164:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1162 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1165 [DequantizeLinear] inputs: [QuantLinearNode__1164:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1162 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1165 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [Conv] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize_1:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd/ReadVariableOp__595 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd/ReadVariableOp__595 -> (64)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:21] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp__596 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp_1__597 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp__598 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp_1__599 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp__596 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp_1__597 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp__598 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp_1__599 -> (64)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: QuantLinearNode__1152 [QuantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_50/Relu:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1154 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1152 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_50/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1154 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: QuantLinearNode__1152:0 for ONNX tensor: QuantLinearNode__1152:0 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1152 [QuantizeLinear] outputs: [QuantLinearNode__1152:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: DequantLinearNode__1153 [DequantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: 
QuantLinearNode__1152:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1154 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1153 [DequantizeLinear] inputs: [QuantLinearNode__1152:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1154 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_25/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_25/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1153 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_25/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_25/add [Add] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_25/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/add_25/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_25/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_25/add for ONNX node: StatefulPartitionedCall/model/add_25/add [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_25/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_25/add:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/add_25/add [Add] outputs: [StatefulPartitionedCall/model/add_25/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_52/Relu [Relu] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_25/add:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/activation_52/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_25/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_52/Relu for ONNX node: StatefulPartitionedCall/model/activation_52/Relu [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_52/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_52/Relu:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/activation_52/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_52/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: QuantLinearNode__1176 [QuantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_52/Relu:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1174 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1176 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_52/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1174 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: QuantLinearNode__1176:0 for ONNX tensor: QuantLinearNode__1176:0 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1176 [QuantizeLinear] outputs: [QuantLinearNode__1176:0 -> (-1, 64, 8, 8)[FLOAT]], 
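The residual structure is also visible here: the same activation (activation_50/Relu:0) is quantized twice with the same scale tensor (quant_scale__1154), once for the convolution branch (QuantLinearNode__1156) and once for the identity branch (QuantLinearNode__1152 -> quant_identity_25), so both operands of add_25/add share a scale before the elementwise sum and the following ReLU. Below is a hypothetical sketch of that block tail under the same assumed TensorRT Python API as above (skip_dq: dequantized identity branch, bn_out: BatchNormalization output); it is an illustration, not code taken from this run.

    import tensorrt as trt

    def add_residual_tail(net, skip_dq, bn_out):
        # Elementwise sum of the identity branch and the BatchNorm output,
        # followed by ReLU -- the add_*/add and activation_*/Relu nodes above.
        add = net.add_elementwise(skip_dq, bn_out, trt.ElementWiseOperation.SUM)
        relu = net.add_activation(add.get_output(0), trt.ActivationType.RELU)
        return relu.get_output(0)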
[10/04/2021-21:26:21] [V] [TRT] Parsing node: DequantLinearNode__1177 [DequantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: QuantLinearNode__1176:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1174 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1177 [DequantizeLinear] inputs: [QuantLinearNode__1176:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1174 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1177 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [Conv] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize_1:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd/ReadVariableOp__607 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd/ReadVariableOp__607 -> (64)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:21] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp__608 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp_1__609 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp__610 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp_1__611 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp__608 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp_1__609 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp__610 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp_1__611 -> (64)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_53/Relu [Relu] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/activation_53/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_53/Relu for ONNX node: StatefulPartitionedCall/model/activation_53/Relu [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_53/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_53/Relu:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/activation_53/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_53/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] 
[V] [TRT] Parsing node: QuantLinearNode__1184 [QuantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_53/Relu:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1182 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1184 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_53/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1182 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: QuantLinearNode__1184:0 for ONNX tensor: QuantLinearNode__1184:0 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1184 [QuantizeLinear] outputs: [QuantLinearNode__1184:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: DequantLinearNode__1185 [DequantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: QuantLinearNode__1184:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1182 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1185 [DequantizeLinear] inputs: [QuantLinearNode__1184:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1182 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1185 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [Conv] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize_1:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd/ReadVariableOp__617 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd/ReadVariableOp__617 -> (64)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:26:21] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp__618 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp_1__619 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp__620 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp_1__621 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp__618 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp_1__619 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp__620 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp_1__621 -> (64)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: QuantLinearNode__1172 [QuantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_52/Relu:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1174 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1172 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_52/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1174 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: QuantLinearNode__1172:0 for ONNX tensor: QuantLinearNode__1172:0 [10/04/2021-21:26:21] [V] [TRT] QuantLinearNode__1172 [QuantizeLinear] outputs: [QuantLinearNode__1172:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: DequantLinearNode__1173 [DequantizeLinear] [10/04/2021-21:26:21] [V] [TRT] Searching for input: 
QuantLinearNode__1172:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: quant_scale__1174 [10/04/2021-21:26:21] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1173 [DequantizeLinear] inputs: [QuantLinearNode__1172:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1174 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] DequantLinearNode__1173 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_26/add [Add] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/add_26/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_26/add for ONNX node: StatefulPartitionedCall/model/add_26/add [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_26/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_26/add:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/add_26/add [Add] outputs: [StatefulPartitionedCall/model/add_26/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_54/Relu [Relu] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_26/add:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/activation_54/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_26/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_54/Relu for ONNX node: StatefulPartitionedCall/model/activation_54/Relu [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_54/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_54/Relu:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/activation_54/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_54/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/global_average_pooling2d/Mean [GlobalAveragePool] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_54/Relu:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/global_average_pooling2d/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model/activation_54/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/global_average_pooling2d/Mean for ONNX node: StatefulPartitionedCall/model/global_average_pooling2d/Mean [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/global_average_pooling2d/Mean:0 for ONNX tensor: StatefulPartitionedCall/model/global_average_pooling2d/Mean:0 
[10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/global_average_pooling2d/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model/global_average_pooling2d/Mean:0 -> (-1, 64, 1, 1)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 [Squeeze] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/global_average_pooling2d/Mean:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: const_axes__2019 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 [Squeeze] inputs: [StatefulPartitionedCall/model/global_average_pooling2d/Mean:0 -> (-1, 64, 1, 1)[FLOAT]], [const_axes__2019 -> (2)[INT32]], [10/04/2021-21:26:21] [V] [TRT] Original shape: (_, 64, 1, 1), squeezing to: (_, _) [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 for ONNX node: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020:0 for ONNX tensor: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 [Squeeze] outputs: [StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020:0 -> (-1, 64)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/dense/MatMul [MatMul] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/dense/MatMul/ReadVariableOp__622 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul [MatMul] inputs: [StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020:0 -> (-1, 64)[FLOAT]], [StatefulPartitionedCall/model/dense/MatMul/ReadVariableOp__622 -> (64, 10)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/dense/MatMul/ReadVariableOp__622 for ONNX node: StatefulPartitionedCall/model/dense/MatMul/ReadVariableOp__622 [10/04/2021-21:26:21] [V] [TRT] GEMM: using FC layer instead of MM because all criteria were met. 
[10/04/2021-21:26:21] [V] [TRT] Original shape: (_, 64), unsqueezing to: (_, _, _, _) [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/dense/MatMul for ONNX node: StatefulPartitionedCall/model/dense/MatMul [10/04/2021-21:26:21] [V] [TRT] Original shape: (_, 10, 1, 1), squeezing to: (_, _) [10/04/2021-21:26:21] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/dense/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model/dense/MatMul:0 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul [MatMul] outputs: [StatefulPartitionedCall/model/dense/MatMul:0 -> (-1, 10)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Parsing node: StatefulPartitionedCall/model/dense/BiasAdd [Add] [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/dense/MatMul:0 [10/04/2021-21:26:21] [V] [TRT] Searching for input: StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model/dense/MatMul:0 -> (-1, 10)[FLOAT]], [StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 -> (10)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 for ONNX node: StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 [10/04/2021-21:26:21] [V] [TRT] Registering layer: StatefulPartitionedCall/model/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Registering tensor: dense_0 for ONNX tensor: dense [10/04/2021-21:26:21] [V] [TRT] StatefulPartitionedCall/model/dense/BiasAdd [Add] outputs: [dense -> (-1, 10)[FLOAT]], [10/04/2021-21:26:21] [V] [TRT] Marking dense_0 as output: dense [10/04/2021-21:26:21] [I] Finish parsing network model [10/04/2021-21:26:21] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 378, GPU 2623 (MiB) [10/04/2021-21:26:21] [I] FP32 and INT8 precisions have been specified - more performance might be enabled by additionally specifying --fp16 or --best [10/04/2021-21:26:21] [I] [TRT] [MemUsageSnapshot] Builder begin: CPU 378 MiB, GPU 2623 MiB [10/04/2021-21:26:21] [V] [TRT] Applying generic optimizations to the graph for inference. 
[10/04/2021-21:26:21] [V] [TRT] Original: 1105 layers [10/04/2021-21:26:21] [V] [TRT] After dead-layer removal: 1105 layers [10/04/2021-21:26:21] [V] [TRT] QDQ graph optimizer - constant folding of Q/DQ initializers [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 2) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 10) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 9) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 18) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 17) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 26) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 25) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 34) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 33) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 42) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 41) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 50) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 49) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 58) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 57) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 66) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 65) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 74) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 73) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 82) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 81) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 90) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 89) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 98) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 97) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 106) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 105) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 114) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 113) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 122) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 121) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 130) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 129) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 138) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 137) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 146) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 145) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 153) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 152) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 161) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 160) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 169) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 168) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 177) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 176) [Constant] 
[10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 185) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 184) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 193) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 192) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 201) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 200) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 209) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 208) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 217) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 216) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 225) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 224) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 232) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 231) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 238) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 237) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 244) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 243) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 250) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 249) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 256) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 255) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 262) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 261) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 268) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 267) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 274) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 273) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 280) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 279) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 286) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 285) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 292) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 291) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 298) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 297) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 304) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 303) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 310) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 309) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 316) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 315) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 322) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 321) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 328) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 327) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 334) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 333) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed 
Layer* 340) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 339) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 802) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 801) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 808) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 807) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 814) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 813) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 820) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 819) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 826) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 825) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 832) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 831) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 838) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 837) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 844) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 843) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 850) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 849) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 856) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 855) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 349) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 348) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 352) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 351) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 358) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 357) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 374) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 373) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 377) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 376) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 383) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 382) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 399) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 398) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 402) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 401) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 408) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 407) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 424) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 423) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 427) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 426) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 433) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 432) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 449) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 448) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 452) [Constant] [10/04/2021-21:26:21] [V] 
[TRT] Removing (Unnamed Layer* 451) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 458) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 457) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 474) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 473) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 477) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 476) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 483) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 482) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 499) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 498) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 502) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 501) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 508) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 507) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 524) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 523) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 527) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 526) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 533) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 532) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 549) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 548) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 552) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 551) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 558) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 557) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 574) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 573) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 577) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 576) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 598) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 597) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 601) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 600) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 586) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 585) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 606) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 605) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 609) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 608) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 615) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 614) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 631) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 630) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 634) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 633) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 640) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 639) [Constant] 
[10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 656) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 655) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 659) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 658) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 665) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 664) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 681) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 680) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 684) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 683) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 690) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 689) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 706) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 705) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 709) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 708) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 715) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 714) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 731) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 730) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 734) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 733) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 740) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 739) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 756) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 755) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 759) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 758) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 765) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 764) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 781) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 780) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 784) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 783) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 790) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 789) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 866) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 865) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 869) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 868) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 890) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 889) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 893) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 892) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 878) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 877) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 898) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 897) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed 
Layer* 901) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 900) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 907) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 906) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 923) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 922) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 926) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 925) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 932) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 931) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 948) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 947) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 951) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 950) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 957) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 956) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 973) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 972) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 976) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 975) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 982) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 981) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 998) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 997) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1001) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1000) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1007) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1006) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1023) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1022) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1026) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1025) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1032) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1031) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1048) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1047) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1051) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1050) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1057) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1056) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1073) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1072) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1076) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1075) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1082) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1081) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 6) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 5) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 14) [Constant] 
[10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 13) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 22) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 21) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 30) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 29) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 38) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 37) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 46) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 45) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 54) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 53) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 62) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 61) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 70) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 69) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 78) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 77) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 86) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 85) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 94) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 93) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 102) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 101) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 110) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 109) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 118) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 117) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 126) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 125) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 134) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 133) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 142) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 141) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 149) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 148) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 157) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 156) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 165) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 164) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 173) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 172) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 181) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 180) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 189) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 188) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 197) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 196) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 205) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 204) 
[Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 213) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 212) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 221) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 220) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 229) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 228) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 235) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 234) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 241) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 240) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 247) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 246) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 253) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 252) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 259) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 258) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 265) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 264) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 271) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 270) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 277) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 276) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 283) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 282) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 289) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 288) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 295) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 294) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 301) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 300) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 307) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 306) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 313) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 312) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 319) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 318) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 325) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 324) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 331) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 330) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 337) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 336) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 343) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 342) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 805) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 804) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 811) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 810) [Constant] [10/04/2021-21:26:21] [V] [TRT] 
Removing (Unnamed Layer* 817) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 816) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 823) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 822) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 829) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 828) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 835) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 834) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 841) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 840) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 847) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 846) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 853) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 852) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 859) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 858) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 366) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 365) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 369) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 368) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 361) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 360) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 391) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 390) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 394) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 393) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 386) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 385) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 416) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 415) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 419) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 418) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 411) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 410) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 441) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 440) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 444) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 443) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 436) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 435) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 466) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 465) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 469) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 468) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 461) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 460) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 491) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 490) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 494) [Constant] 
[10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 493) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 486) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 485) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 516) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 515) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 519) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 518) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 511) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 510) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 541) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 540) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 544) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 543) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 536) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 535) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 566) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 565) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 569) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 568) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 561) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 560) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 591) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 590) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 594) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 593) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 583) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 582) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 623) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 622) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 626) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 625) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 618) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 617) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 648) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 647) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 651) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 650) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 643) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 642) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 673) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 672) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 676) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 675) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 668) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 667) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 698) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 697) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 701) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed 
Layer* 700) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 693) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 692) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 723) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 722) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 726) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 725) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 718) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 717) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 748) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 747) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 751) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 750) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 743) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 742) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 773) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 772) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 776) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 775) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 768) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 767) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 796) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 795) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 799) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 798) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 793) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 792) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 883) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 882) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 886) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 885) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 875) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 874) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 915) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 914) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 918) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 917) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 910) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 909) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 940) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 939) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 943) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 942) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 935) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 934) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 965) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 964) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 968) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 967) [Constant] [10/04/2021-21:26:21] [V] 
[TRT] Removing (Unnamed Layer* 960) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 959) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 990) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 989) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 993) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 992) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 985) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 984) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1015) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1014) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1018) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1017) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1010) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1009) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1040) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1039) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1043) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1042) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1035) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1034) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1065) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1064) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1068) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1067) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1060) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1059) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1090) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1089) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1093) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1092) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1085) [Constant] [10/04/2021-21:26:21] [V] [TRT] Removing (Unnamed Layer* 1084) [Constant] [10/04/2021-21:26:21] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 with (Unnamed Layer* 1108) [Shuffle] [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 + (Unnamed Layer* 1108) [Shuffle] [10/04/2021-21:26:21] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 with (Unnamed Layer* 1115) [Shuffle] [10/04/2021-21:26:21] [V] [TRT] After Myelin optimization: 538 layers [10/04/2021-21:26:21] [V] [TRT] Convert layer type of StatefulPartitionedCall/model/dense/MatMul from FULLY_CONNECTED to CONVOLUTION [10/04/2021-21:26:21] [V] [TRT] Removing shuffle_between_StatefulPartitionedCall/model/global_average_pooling2d/Mean:0_and_StatefulPartitionedCall/model/dense/MatMul [10/04/2021-21:26:21] [V] [TRT] After scale fusion: 538 layers [10/04/2021-21:26:21] [V] [TRT] QDQ graph optimizer - constant folding of Q/DQ initializers [10/04/2021-21:26:21] [V] [TRT] QDQ graph optimizer forward pass - DQ motions and fusions [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add/add with 
StatefulPartitionedCall/model/activation_2/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_1/add with StatefulPartitionedCall/model/activation_4/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_2/add with StatefulPartitionedCall/model/activation_6/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_3/add with StatefulPartitionedCall/model/activation_8/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_4/add with StatefulPartitionedCall/model/activation_10/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_5/add with StatefulPartitionedCall/model/activation_12/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_6/add with StatefulPartitionedCall/model/activation_14/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_7/add with StatefulPartitionedCall/model/activation_16/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_8/add with StatefulPartitionedCall/model/activation_18/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_9/add with StatefulPartitionedCall/model/activation_20/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_10/add with StatefulPartitionedCall/model/activation_22/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_11/add with StatefulPartitionedCall/model/activation_24/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_12/add with StatefulPartitionedCall/model/activation_26/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_13/add with StatefulPartitionedCall/model/activation_28/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_14/add with StatefulPartitionedCall/model/activation_30/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_15/add with StatefulPartitionedCall/model/activation_32/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_16/add with StatefulPartitionedCall/model/activation_34/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_17/add with StatefulPartitionedCall/model/activation_36/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_18/add with StatefulPartitionedCall/model/activation_38/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_19/add with StatefulPartitionedCall/model/activation_40/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_20/add with StatefulPartitionedCall/model/activation_42/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_21/add with StatefulPartitionedCall/model/activation_44/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_22/add with StatefulPartitionedCall/model/activation_46/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_23/add with StatefulPartitionedCall/model/activation_48/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing 
StatefulPartitionedCall/model/add_24/add with StatefulPartitionedCall/model/activation_50/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_25/add with StatefulPartitionedCall/model/activation_52/Relu [10/04/2021-21:26:21] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_26/add with StatefulPartitionedCall/model/activation_54/Relu [10/04/2021-21:26:21] [V] [TRT] Swap the layer type of StatefulPartitionedCall/model/global_average_pooling2d/Mean from REDUCE to POOLING [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 with QuantLinearNode__992_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 with QuantLinearNode__980_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 with QuantLinearNode__972_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 with QuantLinearNode__960_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 with QuantLinearNode__952_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 with QuantLinearNode__940_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 with QuantLinearNode__932_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 with QuantLinearNode__920_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 with QuantLinearNode__912_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 with QuantLinearNode__900_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 with QuantLinearNode__892_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 with QuantLinearNode__880_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 with QuantLinearNode__872_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 with QuantLinearNode__860_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 with QuantLinearNode__852_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 with QuantLinearNode__840_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 with QuantLinearNode__832_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: 
Fusing StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 with QuantLinearNode__820_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 with QuantLinearNode__812_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 with QuantLinearNode__804_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 with QuantLinearNode__792_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 with QuantLinearNode__784_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 with QuantLinearNode__772_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 with QuantLinearNode__764_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 with QuantLinearNode__752_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 with QuantLinearNode__744_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 with QuantLinearNode__732_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 with QuantLinearNode__724_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 with QuantLinearNode__712_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 with QuantLinearNode__704_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 with QuantLinearNode__692_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 with QuantLinearNode__684_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 with QuantLinearNode__672_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 with QuantLinearNode__664_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 with QuantLinearNode__652_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 with QuantLinearNode__644_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d/transpose__8 with QuantLinearNode__632_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 with 
QuantLinearNode__1188_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 with QuantLinearNode__1180_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 with QuantLinearNode__1168_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 with QuantLinearNode__1160_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 with QuantLinearNode__1148_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 with QuantLinearNode__1140_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 with QuantLinearNode__1128_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 with QuantLinearNode__1120_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 with QuantLinearNode__1108_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 with QuantLinearNode__1100_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 with QuantLinearNode__1088_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 with QuantLinearNode__1080_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 with QuantLinearNode__1068_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 with QuantLinearNode__1060_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 with QuantLinearNode__1048_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 with QuantLinearNode__1040_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 with QuantLinearNode__1028_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 with QuantLinearNode__1020_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 with QuantLinearNode__1008_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 with QuantLinearNode__1000_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_1/Relu with QuantLinearNode__648_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping 
StatefulPartitionedCall/model/activation_3/Relu with QuantLinearNode__668_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_5/Relu with QuantLinearNode__688_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_7/Relu with QuantLinearNode__708_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_9/Relu with QuantLinearNode__728_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_11/Relu with QuantLinearNode__748_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_13/Relu with QuantLinearNode__768_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_15/Relu with QuantLinearNode__788_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_17/Relu with QuantLinearNode__808_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_19/Relu with QuantLinearNode__836_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_21/Relu with QuantLinearNode__856_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_23/Relu with QuantLinearNode__876_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_25/Relu with QuantLinearNode__896_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_27/Relu with QuantLinearNode__916_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_29/Relu with QuantLinearNode__936_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_31/Relu with QuantLinearNode__956_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_33/Relu with QuantLinearNode__976_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_35/Relu with QuantLinearNode__996_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_37/Relu with QuantLinearNode__1024_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_39/Relu with QuantLinearNode__1044_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_41/Relu with QuantLinearNode__1064_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_43/Relu with QuantLinearNode__1084_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_45/Relu with QuantLinearNode__1104_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_47/Relu with QuantLinearNode__1124_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_49/Relu with QuantLinearNode__1144_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_51/Relu with QuantLinearNode__1164_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_53/Relu with QuantLinearNode__1184_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating 
QuantLinearNode__636_quantize_scale_node which duplicates (Q) QuantLinearNode__640_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__636_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__656_quantize_scale_node which duplicates (Q) QuantLinearNode__660_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__656_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__676_quantize_scale_node which duplicates (Q) QuantLinearNode__680_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__676_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__696_quantize_scale_node which duplicates (Q) QuantLinearNode__700_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__696_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__716_quantize_scale_node which duplicates (Q) QuantLinearNode__720_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__716_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__736_quantize_scale_node which duplicates (Q) QuantLinearNode__740_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__736_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__756_quantize_scale_node which duplicates (Q) QuantLinearNode__760_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__756_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__776_quantize_scale_node which duplicates (Q) QuantLinearNode__780_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__776_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__796_quantize_scale_node which duplicates (Q) QuantLinearNode__800_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__796_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__816_quantize_scale_node which duplicates (Q) QuantLinearNode__828_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__816_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__844_quantize_scale_node which duplicates (Q) QuantLinearNode__848_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__844_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__864_quantize_scale_node which duplicates (Q) QuantLinearNode__868_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__864_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__884_quantize_scale_node which duplicates (Q) QuantLinearNode__888_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__884_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__904_quantize_scale_node which duplicates (Q) QuantLinearNode__908_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__904_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__924_quantize_scale_node which duplicates (Q) QuantLinearNode__928_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__924_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__944_quantize_scale_node which duplicates (Q) QuantLinearNode__948_quantize_scale_node 
[10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__944_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__964_quantize_scale_node which duplicates (Q) QuantLinearNode__968_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__964_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__984_quantize_scale_node which duplicates (Q) QuantLinearNode__988_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__984_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__1004_quantize_scale_node which duplicates (Q) QuantLinearNode__1016_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1004_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__1032_quantize_scale_node which duplicates (Q) QuantLinearNode__1036_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1032_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__1052_quantize_scale_node which duplicates (Q) QuantLinearNode__1056_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1052_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__1072_quantize_scale_node which duplicates (Q) QuantLinearNode__1076_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1072_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__1092_quantize_scale_node which duplicates (Q) QuantLinearNode__1096_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1092_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__1112_quantize_scale_node which duplicates (Q) QuantLinearNode__1116_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1112_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__1132_quantize_scale_node which duplicates (Q) QuantLinearNode__1136_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1132_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__1152_quantize_scale_node which duplicates (Q) QuantLinearNode__1156_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1152_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Eliminating QuantLinearNode__1172_quantize_scale_node which duplicates (Q) QuantLinearNode__1176_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1172_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/activation/Relu with QuantLinearNode__640_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QDQ graph optimizer quantization pass - Generate quantized ops [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing 
StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing 
StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu with QuantLinearNode__660_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__660_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__649_quantize_scale_node and DequantLinearNode__653_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__660_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__649_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__653_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd with StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__637_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu with QuantLinearNode__680_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__680_quantize_scale_node into 
StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__669_quantize_scale_node and DequantLinearNode__673_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__680_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__669_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__673_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd with StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__657_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu with QuantLinearNode__700_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__700_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__689_quantize_scale_node and DequantLinearNode__693_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__700_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__689_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__693_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + QuantLinearNode__692_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + QuantLinearNode__692_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd with StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__677_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu with QuantLinearNode__720_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__720_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__709_quantize_scale_node and DequantLinearNode__713_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__720_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__709_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__713_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node with 
StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd with StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__697_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu with QuantLinearNode__740_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__740_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__729_quantize_scale_node and DequantLinearNode__733_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__740_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__729_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__733_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd with StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__717_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu with QuantLinearNode__760_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__760_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__749_quantize_scale_node and DequantLinearNode__753_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__760_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__749_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__753_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + QuantLinearNode__752_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + QuantLinearNode__752_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd with StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__737_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu with QuantLinearNode__780_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: 
fusing QuantLinearNode__780_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__769_quantize_scale_node and DequantLinearNode__773_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__780_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__769_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__773_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + QuantLinearNode__772_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + QuantLinearNode__772_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd with StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__757_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu with QuantLinearNode__800_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__800_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__789_quantize_scale_node and DequantLinearNode__793_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__800_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__789_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__793_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd with StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__777_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu with QuantLinearNode__828_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__828_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__809_quantize_scale_node and DequantLinearNode__813_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__828_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__809_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__813_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing 
StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd with StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__797_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu with QuantLinearNode__848_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__848_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__837_quantize_scale_node and DequantLinearNode__841_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__848_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__837_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__841_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd with StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__825_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu with QuantLinearNode__868_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__868_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__857_quantize_scale_node and DequantLinearNode__861_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__868_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__857_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__861_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd with StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__845_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_11/add + 
StatefulPartitionedCall/model/activation_24/Relu with QuantLinearNode__888_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__888_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__877_quantize_scale_node and DequantLinearNode__881_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__888_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__877_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__881_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd with StatefulPartitionedCall/model/add_11/add + StatefulPartitionedCall/model/activation_24/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__865_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu with QuantLinearNode__908_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__908_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__897_quantize_scale_node and DequantLinearNode__901_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__908_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__897_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__901_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd with StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__885_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu with QuantLinearNode__928_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__928_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__917_quantize_scale_node and DequantLinearNode__921_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__928_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__917_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing 
DequantLinearNode__921_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd with StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__905_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu with QuantLinearNode__948_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__948_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__937_quantize_scale_node and DequantLinearNode__941_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__948_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__937_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__941_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd with StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__925_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu with QuantLinearNode__968_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__968_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__957_quantize_scale_node and DequantLinearNode__961_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__968_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__957_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__961_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd with StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__945_quantize_scale_node 
[10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_16/add + StatefulPartitionedCall/model/activation_34/Relu with QuantLinearNode__988_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__988_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__977_quantize_scale_node and DequantLinearNode__981_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__988_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__977_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__981_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd with StatefulPartitionedCall/model/add_16/add + StatefulPartitionedCall/model/activation_34/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__965_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu with QuantLinearNode__1016_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1016_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__997_quantize_scale_node and DequantLinearNode__1001_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1016_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__997_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1001_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd with StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__985_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu with QuantLinearNode__1036_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1036_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1025_quantize_scale_node and DequantLinearNode__1029_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1036_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] 
Removing DequantLinearNode__1025_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1029_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd with StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1013_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu with QuantLinearNode__1056_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1056_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1045_quantize_scale_node and DequantLinearNode__1049_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1056_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1045_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1049_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd with StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1033_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_20/add + StatefulPartitionedCall/model/activation_42/Relu with QuantLinearNode__1076_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1076_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1065_quantize_scale_node and DequantLinearNode__1069_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1076_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1065_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1069_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + QuantLinearNode__1068_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + QuantLinearNode__1068_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd with StatefulPartitionedCall/model/add_20/add + 
StatefulPartitionedCall/model/activation_42/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1053_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu with QuantLinearNode__1096_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1096_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1085_quantize_scale_node and DequantLinearNode__1089_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1096_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1085_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1089_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd with StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1073_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu with QuantLinearNode__1116_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1116_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1105_quantize_scale_node and DequantLinearNode__1109_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1116_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1105_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1109_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd with StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1093_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu with QuantLinearNode__1136_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1136_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1125_quantize_scale_node and DequantLinearNode__1129_quantize_scale_node) into 
StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1136_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1125_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1129_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd with StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1113_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu with QuantLinearNode__1156_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1156_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1145_quantize_scale_node and DequantLinearNode__1149_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1156_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1145_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1149_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd with StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1133_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Swapping StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu with QuantLinearNode__1176_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1176_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1165_quantize_scale_node and DequantLinearNode__1169_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1176_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1165_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1169_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + QuantLinearNode__1168_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + 
QuantLinearNode__1168_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd with StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1153_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__640_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__629_quantize_scale_node and DequantLinearNode__633_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__640_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__629_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__633_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__648_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__641_quantize_scale_node and DequantLinearNode__645_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__648_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__641_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__645_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__668_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__661_quantize_scale_node and DequantLinearNode__665_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__668_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__661_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__665_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__688_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__681_quantize_scale_node and DequantLinearNode__685_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__688_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__681_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__685_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__708_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__701_quantize_scale_node and DequantLinearNode__705_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__708_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__701_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__705_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__728_quantize_scale_node into 
StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__721_quantize_scale_node and DequantLinearNode__725_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__728_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__721_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__725_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__748_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__741_quantize_scale_node and DequantLinearNode__745_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__748_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__741_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__745_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__768_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__761_quantize_scale_node and DequantLinearNode__765_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__768_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__761_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__765_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__788_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__781_quantize_scale_node and DequantLinearNode__785_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__788_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__781_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__785_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__808_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__801_quantize_scale_node and DequantLinearNode__805_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__808_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__801_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__805_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__836_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__829_quantize_scale_node and DequantLinearNode__833_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__836_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing 
DequantLinearNode__829_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__833_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__856_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__849_quantize_scale_node and DequantLinearNode__853_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__856_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__849_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__853_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__876_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__869_quantize_scale_node and DequantLinearNode__873_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__876_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__869_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__873_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__896_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__889_quantize_scale_node and DequantLinearNode__893_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__896_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__889_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__893_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__916_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__909_quantize_scale_node and DequantLinearNode__913_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__916_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__909_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__913_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__936_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__929_quantize_scale_node and DequantLinearNode__933_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__936_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__929_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__933_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__956_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing 
(DequantLinearNode__949_quantize_scale_node and DequantLinearNode__953_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__956_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__949_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__953_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__976_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__969_quantize_scale_node and DequantLinearNode__973_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__976_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__969_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__973_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__996_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__989_quantize_scale_node and DequantLinearNode__993_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__996_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__989_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__993_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1024_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1017_quantize_scale_node and DequantLinearNode__1021_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1024_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1017_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1021_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1044_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1037_quantize_scale_node and DequantLinearNode__1041_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1044_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1037_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1041_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1064_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1057_quantize_scale_node and DequantLinearNode__1061_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1064_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1057_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing 
DequantLinearNode__1061_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1084_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1077_quantize_scale_node and DequantLinearNode__1081_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1084_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1077_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1081_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1104_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1097_quantize_scale_node and DequantLinearNode__1101_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1104_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1097_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1101_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1124_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1117_quantize_scale_node and DequantLinearNode__1121_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1124_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1117_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1121_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1144_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1137_quantize_scale_node and DequantLinearNode__1141_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1144_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1137_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1141_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1164_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1157_quantize_scale_node and DequantLinearNode__1161_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1164_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1157_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1161_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1184_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1177_quantize_scale_node and DequantLinearNode__1181_quantize_scale_node) 
into StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1184_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1177_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1181_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1185_quantize_scale_node and DequantLinearNode__1189_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1185_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1189_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__824_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__817_quantize_scale_node and DequantLinearNode__821_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__824_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__817_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__821_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1012_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [10/04/2021-21:26:21] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1005_quantize_scale_node and DequantLinearNode__1009_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [10/04/2021-21:26:21] [V] [TRT] Removing QuantLinearNode__1012_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1005_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] Removing DequantLinearNode__1009_quantize_scale_node [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 + QuantLinearNode__992_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 + QuantLinearNode__972_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 + QuantLinearNode__952_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 + QuantLinearNode__932_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 + QuantLinearNode__912_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 + QuantLinearNode__892_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 + QuantLinearNode__872_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [10/04/2021-21:26:21] [V] [TRT] 
ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 + QuantLinearNode__804_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 + QuantLinearNode__784_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 + QuantLinearNode__764_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 + QuantLinearNode__744_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 + QuantLinearNode__724_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 + QuantLinearNode__704_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 + QuantLinearNode__684_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 + QuantLinearNode__664_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 + QuantLinearNode__1180_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 + QuantLinearNode__1160_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: 
Fusing StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 + QuantLinearNode__1140_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 + QuantLinearNode__1120_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 + QuantLinearNode__1100_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 + QuantLinearNode__1080_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 + QuantLinearNode__1060_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd with StatefulPartitionedCall/model/activation/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd with StatefulPartitionedCall/model/activation_1/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 + QuantLinearNode__664_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd with StatefulPartitionedCall/model/activation_3/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 + QuantLinearNode__684_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd with StatefulPartitionedCall/model/activation_5/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 + QuantLinearNode__704_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd with StatefulPartitionedCall/model/activation_7/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 + QuantLinearNode__724_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd with StatefulPartitionedCall/model/activation_9/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 + QuantLinearNode__744_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd with 
StatefulPartitionedCall/model/activation_11/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 + QuantLinearNode__764_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd with StatefulPartitionedCall/model/activation_13/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 + QuantLinearNode__784_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd with StatefulPartitionedCall/model/activation_15/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 + QuantLinearNode__804_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd with StatefulPartitionedCall/model/activation_17/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd with StatefulPartitionedCall/model/activation_19/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd with StatefulPartitionedCall/model/activation_21/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 + QuantLinearNode__872_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd with StatefulPartitionedCall/model/activation_23/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 + QuantLinearNode__892_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd with StatefulPartitionedCall/model/activation_25/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 + QuantLinearNode__912_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd with StatefulPartitionedCall/model/activation_27/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 + QuantLinearNode__932_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd with StatefulPartitionedCall/model/activation_29/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 + QuantLinearNode__952_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd with StatefulPartitionedCall/model/activation_31/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 + QuantLinearNode__972_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd with StatefulPartitionedCall/model/activation_33/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 + QuantLinearNode__992_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd with StatefulPartitionedCall/model/activation_35/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd with StatefulPartitionedCall/model/activation_37/Relu 
[10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd with StatefulPartitionedCall/model/activation_39/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 + QuantLinearNode__1060_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd with StatefulPartitionedCall/model/activation_41/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 + QuantLinearNode__1080_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd with StatefulPartitionedCall/model/activation_43/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 + QuantLinearNode__1100_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd with StatefulPartitionedCall/model/activation_45/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 + QuantLinearNode__1120_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd with StatefulPartitionedCall/model/activation_47/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 + QuantLinearNode__1140_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd with StatefulPartitionedCall/model/activation_49/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 + QuantLinearNode__1160_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd with StatefulPartitionedCall/model/activation_51/Relu [10/04/2021-21:26:21] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 + QuantLinearNode__1180_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd with StatefulPartitionedCall/model/activation_53/Relu [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd with StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu [10/04/2021-21:26:21] [V] [TRT] -----------SqueezePushDown kSQUEEZE_JOIN case: StatefulPartitionedCall/model/dense/MatMul --> (Unnamed Layer* 1113) [Shuffle] --> StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:21] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] with unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] [10/04/2021-21:26:21] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/dense/MatMul with StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:21] [V] [TRT] After vertical fusions: 63 layers [10/04/2021-21:26:21] [V] [TRT] After dupe layer removal: 62 layers [10/04/2021-21:26:21] [V] [TRT] After final dead-layer removal: 62 layers [10/04/2021-21:26:21] [V] [TRT] After tensor merging: 62 layers [10/04/2021-21:26:21] [V] [TRT] After concat removal: 62 layers [10/04/2021-21:26:21] [V] [TRT] Graph construction and optimization completed in 0.591617 seconds. 
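The fusion messages above show the builder collapsing the explicit QuantizeLinear/DequantizeLinear pairs into their neighbouring convolutions: QuantizeDoubleInputNodes folds the quantize/dequantize scales into each BiasAdd, ConstWeightsFusion merges the transposed weights and their quantize node into the convolution, and ConvReluFusion / ConvEltwiseSumFusion then absorb the ReLU activations and residual adds, leaving the 62-layer graph reported at the end of the phase. A minimal sketch of how the same kind of INT8 engine could be built through the TensorRT 8 Python API instead of trtexec (the model file name and 16 MiB workspace mirror this run; the helper name and everything else is illustrative, not taken from the log):

    import tensorrt as trt

    LOGGER = trt.Logger(trt.Logger.VERBOSE)   # VERBOSE reproduces the [V] fusion log above

    def build_int8_engine(onnx_path="resnet.onnx"):
        builder = trt.Builder(LOGGER)
        network = builder.create_network(
            1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
        parser = trt.OnnxParser(network, LOGGER)
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                raise RuntimeError(parser.get_error(0))

        config = builder.create_builder_config()
        config.max_workspace_size = 16 << 20     # 16 MiB, as in this run
        config.set_flag(trt.BuilderFlag.INT8)    # scales are taken from the Q/DQ nodes

        # Q/DQ fusion (QuantizeDoubleInputNodes, ConstWeightsFusion, ConvReluFusion, ...)
        # runs inside this call, before tactic autotuning.
        return builder.build_serialized_network(network, config)

Because the network is explicitly quantized, the INT8 flag alone should be enough: the scales are read from the Q/DQ constants in the graph, which is why the log can delete those nodes once they have been folded into the adjacent layers.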
[10/04/2021-21:26:21] [I] [TRT] ---------- Layers Running on DLA ---------- [10/04/2021-21:26:21] [I] [TRT] ---------- Layers Running on GPU ---------- [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] QuantLinearNode__628_quantize_scale_node [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 + QuantLinearNode__664_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd + StatefulPartitionedCall/model/activation_3/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd + StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 + QuantLinearNode__684_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd + StatefulPartitionedCall/model/activation_5/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + QuantLinearNode__692_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd + StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 + QuantLinearNode__704_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd + StatefulPartitionedCall/model/activation_7/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd + StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 + QuantLinearNode__724_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd + StatefulPartitionedCall/model/activation_9/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd + StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 + QuantLinearNode__744_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd + StatefulPartitionedCall/model/activation_11/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + 
QuantLinearNode__752_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd + StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 + QuantLinearNode__764_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd + StatefulPartitionedCall/model/activation_13/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + QuantLinearNode__772_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd + StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 + QuantLinearNode__784_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd + StatefulPartitionedCall/model/activation_15/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd + StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 + QuantLinearNode__804_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd + StatefulPartitionedCall/model/activation_17/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd + StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd + StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 + QuantLinearNode__872_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd + StatefulPartitionedCall/model/activation_23/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd + StatefulPartitionedCall/model/add_11/add + StatefulPartitionedCall/model/activation_24/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 + QuantLinearNode__892_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd + StatefulPartitionedCall/model/activation_25/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd + StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 + QuantLinearNode__912_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd + StatefulPartitionedCall/model/activation_27/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd + StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 + QuantLinearNode__932_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd + StatefulPartitionedCall/model/activation_29/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd + StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 + QuantLinearNode__952_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd + StatefulPartitionedCall/model/activation_31/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd + StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 + QuantLinearNode__972_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd + StatefulPartitionedCall/model/activation_33/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd + StatefulPartitionedCall/model/add_16/add + StatefulPartitionedCall/model/activation_34/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 + QuantLinearNode__992_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd + StatefulPartitionedCall/model/activation_35/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd + StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] 
StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd + StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 + QuantLinearNode__1060_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd + StatefulPartitionedCall/model/activation_41/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + QuantLinearNode__1068_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd + StatefulPartitionedCall/model/add_20/add + StatefulPartitionedCall/model/activation_42/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 + QuantLinearNode__1080_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd + StatefulPartitionedCall/model/activation_43/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd + StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 + QuantLinearNode__1100_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd + StatefulPartitionedCall/model/activation_45/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd + StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 + QuantLinearNode__1120_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd + StatefulPartitionedCall/model/activation_47/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd + StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] 
StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 + QuantLinearNode__1140_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd + StatefulPartitionedCall/model/activation_49/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd + StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 + QuantLinearNode__1160_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd + StatefulPartitionedCall/model/activation_51/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + QuantLinearNode__1168_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd + StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] DequantLinearNode__1173_quantize_scale_node [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 + QuantLinearNode__1180_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd + StatefulPartitionedCall/model/activation_53/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/global_average_pooling2d/Mean [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:21] [I] [TRT] [GpuLayer] copied_squeeze_after_StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:22] [V] [TRT] Using cublas a tactic source [10/04/2021-21:26:22] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +226, GPU +226, now: CPU 608, GPU 2849 (MiB) [10/04/2021-21:26:22] [V] [TRT] Using cuDNN as a tactic source [10/04/2021-21:26:23] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +308, GPU +311, now: CPU 916, GPU 3160 (MiB) [10/04/2021-21:26:23] [10/04/2021-21:26:23] [V] [TRT] Constructing optimization profile number 0 [1/1]. 
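At this point every fused layer has been assigned to the GPU (the DLA list above is empty), cuBLAS and cuDNN have been initialised as tactic sources, and the verbose lines that follow record the tactic autotuner timing each layer in several candidate INT8 formats (linear, CHW4, CHW32) and keeping the fastest kernel. This retiming dominates the build time on Xavier; a minimal sketch, assuming TensorRT 8's Python bindings, of how those timings could be cached and reused on later builds (the file name and helper names are illustrative; trtexec exposes the same mechanism through its timing-cache options):

    import os
    import tensorrt as trt

    def attach_timing_cache(config, path="timing.cache"):
        # Load a previously saved cache if one exists; an empty blob creates a fresh cache.
        blob = open(path, "rb").read() if os.path.exists(path) else b""
        cache = config.create_timing_cache(blob)
        config.set_timing_cache(cache, False)   # False: reject a cache built on a different device
        return cache

    def save_timing_cache(config, path="timing.cache"):
        # Persist the tactic timings gathered during build_serialized_network().
        with open(path, "wb") as f:
            f.write(memoryview(config.get_timing_cache().serialize()))

With a warm cache, layer configurations that are already recorded are not re-timed, so most of the "Timing Runner" output below disappears on subsequent builds.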
[10/04/2021-21:26:23] [V] [TRT] *************** Autotuning format combination: Float(3072,1024,32,1) -> Int8(3072,1024,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: QuantLinearNode__628_quantize_scale_node (Scale) [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for QuantLinearNode__628_quantize_scale_node [10/04/2021-21:26:23] [V] [TRT] Tactic: 0 Time: 0.021268 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: 0 Time: 0.021268 [10/04/2021-21:26:23] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning format combination: Float(3072,1024,32,1) -> Int8(1024,1024:4,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: QuantLinearNode__628_quantize_scale_node (Scale) [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for QuantLinearNode__628_quantize_scale_node [10/04/2021-21:26:23] [V] [TRT] Tactic: 0 Time: 0.022512 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: 0 Time: 0.022512 [10/04/2021-21:26:23] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning format combination: Float(3072,1024,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: QuantLinearNode__628_quantize_scale_node (Scale) [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for QuantLinearNode__628_quantize_scale_node [10/04/2021-21:26:23] [V] [TRT] Tactic: 0 Time: 0.024076 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: 0 Time: 0.024076 [10/04/2021-21:26:23] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning Reformat:Int8(3072,1024,32,1) -> Int8(1024,1024:4,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 1002 Time: 0.030964 [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 0 Time: 0.013564 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: 0 Time: 0.013564 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning Reformat:Int8(3072,1024,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 1002 Time: 0.027908 [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 0 Time: 0.020856 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: 0 Time: 0.020856 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:4,32,1) -> Int8(3072,1024,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 1002 Time: 0.028176 [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization 
params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 0 Time: 0.018556 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: 0 Time: 0.018556 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 1002 Time: 0.032888 [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 0 Time: 0.013692 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: 0 Time: 0.013692 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(3072,1024,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 1002 Time: 0.037768 [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 0 Time: 0.018432 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: 0 Time: 0.018432 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(1024,1024:4,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 1002 Time: 0.032648 [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 0 Time: 0.013548 [10/04/2021-21:26:23] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:23] [V] [TRT] Tactic: 1 Time: 0.013556 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: 0 Time: 0.013548 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning format combination: Int8(3072,1024,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (CaskConvolution) [10/04/2021-21:26:23] [V] [TRT] CaskConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:23] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu 
(FusedConvActConvolution) [10/04/2021-21:26:23] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (CaskConvolution) [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 4438325421691896755 [10/04/2021-21:26:23] [V] [TRT] Tactic: 4438325421691896755 Time: 0.040076 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:23] [V] [TRT] Tactic: 4581732244273465060 Time: 0.038408 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 [10/04/2021-21:26:23] [V] [TRT] Tactic: 4934335053031119367 Time: 0.043384 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:23] [V] [TRT] Tactic: 6797040896965118050 Time: 0.048396 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:23] [V] [TRT] Tactic: 8006952294591770973 Time: 0.038784 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:23] [V] [TRT] Tactic: -7210942453088153035 Time: 0.048244 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:23] [V] [TRT] Tactic: -6282183216199417697 Time: 0.038276 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_nn_v1 
Tactic: -5026383765466876607 [10/04/2021-21:26:23] [V] [TRT] Tactic: -5026383765466876607 Time: 0.048636 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:23] [V] [TRT] Tactic: -5016725782072253841 Time: 0.041604 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_nn_v1 Tactic: -1370999262391786833 [10/04/2021-21:26:23] [V] [TRT] Tactic: -1370999262391786833 Time: 0.043544 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: -6282183216199417697 Time: 0.038276 [10/04/2021-21:26:23] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6282183216199417697 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (CaskConvolution) [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:23] [V] [TRT] Tactic: 1213457772632185722 Time: 0.04198 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:23] [V] [TRT] Tactic: 1713441381477652893 Time: 0.039024 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_c32_nn_v1 Tactic: 7125598890155666458 [10/04/2021-21:26:23] [V] [TRT] Tactic: 7125598890155666458 Time: 0.043016 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:23] [V] [TRT] Tactic: 8047041638267142825 Time: 0.037004 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:23] [V] [TRT] 
Tactic: -7846982807478255793 Time: 0.038396 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:23] [V] [TRT] Tactic: -6459719113600909000 Time: 0.038144 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:23] [V] [TRT] Tactic: -4573925292554651334 Time: 0.04736 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:23] [V] [TRT] Tactic: -3566249366964946311 Time: 0.040572 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:23] [V] [TRT] Tactic: -2002418013575043687 Time: 0.048152 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_c32_nn_v1 Tactic: -1659631603542281459 [10/04/2021-21:26:23] [V] [TRT] Tactic: -1659631603542281459 Time: 0.048524 [10/04/2021-21:26:23] [V] [TRT] Fastest Tactic: 8047041638267142825 Time: 0.037004 [10/04/2021-21:26:23] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 8047041638267142825 [10/04/2021-21:26:23] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:23] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (FusedConvActConvolution) [10/04/2021-21:26:23] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (CudaGroupConvolution) [10/04/2021-21:26:23] [V] [TRT] 
CudaGroupConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (CaskConvolution) [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:23] [V] [TRT] Tactic: 66319348402778770 Time: 0.050932 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_large_nt_v1 Tactic: 1931698692231796048 [10/04/2021-21:26:23] [V] [TRT] Tactic: 1931698692231796048 Time: 0.058504 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2004366221877065623 [10/04/2021-21:26:23] [V] [TRT] Tactic: 2004366221877065623 Time: 0.036756 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2169338034361422162 [10/04/2021-21:26:23] [V] [TRT] Tactic: 2169338034361422162 Time: 0.043144 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:23] [V] [TRT] Tactic: 2271687430539765460 Time: 0.072568 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:23] [V] [TRT] Tactic: 2284815435928292401 Time: 0.076164 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 
[10/04/2021-21:26:23] [V] [TRT] Tactic: 3342635629009683930 Time: 0.038144 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3740557186499054067 [10/04/2021-21:26:23] [V] [TRT] Tactic: 3740557186499054067 Time: 0.054932 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:23] [V] [TRT] Tactic: 3768633326807889446 Time: 0.033416 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:23] [V] [TRT] Tactic: 5105539492142133503 Time: 0.03802 [10/04/2021-21:26:23] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:24] [V] [TRT] Tactic: 6129427510024568065 Time: 0.037648 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:24] [V] [TRT] Tactic: 7038629879025143810 Time: 0.07502 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:24] [V] [TRT] Tactic: 7039764449991095921 Time: 0.052728 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:24] [V] [TRT] Tactic: 7585914864117166414 Time: 0.033924 
[10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:24] [V] [TRT] Tactic: -9134855779557081787 Time: 0.043152 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -9114895246540757312 [10/04/2021-21:26:24] [V] [TRT] Tactic: -9114895246540757312 Time: 0.05314 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:24] [V] [TRT] Tactic: -8945506186161066102 Time: 0.052112 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_large_nt_v1 Tactic: -8787970778927801941 [10/04/2021-21:26:24] [V] [TRT] Tactic: -8787970778927801941 Time: 0.053508 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:24] [V] [TRT] Tactic: -8707098593641355108 Time: 0.057472 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:24] [V] [TRT] Tactic: -8343122771093605666 Time: 0.03802 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_large_nt_v1 Tactic: -8225786209923559953 [10/04/2021-21:26:24] [V] [TRT] Tactic: -8225786209923559953 Time: 0.073096 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: 
volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_nt_v1 Tactic: -7373087278866484214 [10/04/2021-21:26:24] [V] [TRT] Tactic: -7373087278866484214 Time: 0.058104 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:24] [V] [TRT] Tactic: -7274936339335021260 Time: 0.057728 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r3s3 Tactic: -6068501086087743547 [10/04/2021-21:26:24] [V] [TRT] Tactic: -6068501086087743547 Time: 0.07272 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -5693797309970869451 [10/04/2021-21:26:24] [V] [TRT] Tactic: -5693797309970869451 Time: 0.047476 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -3036929044958869524 [10/04/2021-21:26:24] [V] [TRT] Tactic: -3036929044958869524 Time: 0.056844 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:24] [V] [TRT] Tactic: -2102888629196925141 Time: 0.052732 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:24] [V] [TRT] Tactic: -1832766392358096151 Time: 0.062204 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -1467400415054408443 [10/04/2021-21:26:24] [V] [TRT] Tactic: 
-1467400415054408443 Time: 0.071944 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:24] [V] [TRT] Tactic: -674235064782459186 Time: 0.071812 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:24] [V] [TRT] Tactic: -629322288573675003 Time: 0.038396 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:24] [V] [TRT] Tactic: -182858804213663094 Time: 0.05248 [10/04/2021-21:26:24] [V] [TRT] Fastest Tactic: 3768633326807889446 Time: 0.033416 [10/04/2021-21:26:24] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 3768633326807889446 [10/04/2021-21:26:24] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:24] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:24] [V] [TRT] Tactic: 1002 Time: 0.032908 [10/04/2021-21:26:24] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:24] [V] [TRT] Tactic: 0 Time: 0.013944 [10/04/2021-21:26:24] [V] [TRT] Fastest Tactic: 0 Time: 0.013944 [10/04/2021-21:26:24] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:24] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:24] [V] [TRT] Tactic: 1002 Time: 0.03328 [10/04/2021-21:26:24] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:24] [V] [TRT] Tactic: 0 Time: 0.018816 [10/04/2021-21:26:24] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:24] [V] [TRT] Tactic: 1 Time: 0.018056 [10/04/2021-21:26:24] [V] [TRT] Fastest Tactic: 1 Time: 0.018056 [10/04/2021-21:26:24] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:24] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:24] [V] [TRT] Tactic: 1002 Time: 0.032764 [10/04/2021-21:26:24] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:24] [V] [TRT] Tactic: 0 Time: 0.013832 
[10/04/2021-21:26:24] [V] [TRT] Fastest Tactic: 0 Time: 0.013832 [10/04/2021-21:26:24] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:24] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:24] [V] [TRT] Tactic: 1002 Time: 0.03326 [10/04/2021-21:26:24] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:24] [V] [TRT] Tactic: 0 Time: 0.018696 [10/04/2021-21:26:24] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:24] [V] [TRT] Tactic: 1 Time: 0.017264 [10/04/2021-21:26:24] [V] [TRT] Fastest Tactic: 1 Time: 0.017264 [10/04/2021-21:26:24] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:24] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:24] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:24] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (FusedConvActConvolution) [10/04/2021-21:26:24] [V] [TRT] Tactic: 524287 Time: 0.04788 [10/04/2021-21:26:24] [V] [TRT] Tactic: 720895 Time: 0.062212 [10/04/2021-21:26:24] [V] [TRT] Tactic: 983039 Time: 0.048004 [10/04/2021-21:26:24] [V] [TRT] Tactic: 1048575 Time: 0.043264 [10/04/2021-21:26:24] [V] [TRT] Tactic: 1703935 Time: 0.03838 [10/04/2021-21:26:24] [V] [TRT] Tactic: 1966079 Time: 0.057472 [10/04/2021-21:26:24] [V] [TRT] Tactic: 2031615 Time: 0.062084 [10/04/2021-21:26:24] [V] [TRT] Tactic: 2228223 Time: 0.050568 [10/04/2021-21:26:24] [V] [TRT] Tactic: 2752511 Time: 0.061584 [10/04/2021-21:26:24] [V] [TRT] Tactic: 2818047 Time: 0.096132 [10/04/2021-21:26:24] [V] [TRT] Tactic: 2883583 Time: 0.077064 [10/04/2021-21:26:24] [V] [TRT] Tactic: 3014655 Time: 0.043272 [10/04/2021-21:26:24] [V] [TRT] Tactic: 3145727 Time: 0.055936 [10/04/2021-21:26:24] [V] [TRT] Tactic: 3473407 Time: 0.07258 [10/04/2021-21:26:24] [V] [TRT] Tactic: 3604479 Time: 0.043144 [10/04/2021-21:26:24] [V] [TRT] Tactic: 3735551 Time: 0.067316 [10/04/2021-21:26:24] [V] [TRT] Tactic: 4390911 Time: 0.075756 [10/04/2021-21:26:24] [V] [TRT] Tactic: 5046271 Time: 0.040064 [10/04/2021-21:26:24] [V] [TRT] Tactic: 5963775 Time: 0.066936 [10/04/2021-21:26:24] [V] [TRT] Tactic: 6160383 Time: 0.05248 [10/04/2021-21:26:24] [V] [TRT] Tactic: 6488063 Time: 0.047736 [10/04/2021-21:26:24] [V] [TRT] Tactic: 6881279 Time: 0.052872 [10/04/2021-21:26:24] [V] [TRT] Tactic: 7995391 Time: 0.048116 [10/04/2021-21:26:24] [V] [TRT] Tactic: 8585215 Time: 0.052876 [10/04/2021-21:26:24] [V] [TRT] Tactic: 8978431 Time: 0.062848 [10/04/2021-21:26:24] [V] [TRT] Tactic: 9043967 Time: 0.043128 [10/04/2021-21:26:24] [V] [TRT] Tactic: 9175039 Time: 0.042384 [10/04/2021-21:26:24] [V] [TRT] Tactic: 9502719 Time: 0.07642 [10/04/2021-21:26:24] [V] [TRT] Tactic: 9830399 Time: 0.081672 [10/04/2021-21:26:24] [V] [TRT] 
Tactic: 10027007 Time: 0.043396 [10/04/2021-21:26:24] [V] [TRT] Tactic: 10092543 Time: 0.075408 [10/04/2021-21:26:24] [V] [TRT] Tactic: 10289151 Time: 0.057484 [10/04/2021-21:26:24] [V] [TRT] Tactic: 10485759 Time: 0.038264 [10/04/2021-21:26:24] [V] [TRT] Tactic: 10813439 Time: 0.043028 [10/04/2021-21:26:24] [V] [TRT] Fastest Tactic: 10485759 Time: 0.038264 [10/04/2021-21:26:24] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (CaskConvolution) [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 4438325421691896755 [10/04/2021-21:26:24] [V] [TRT] Tactic: 4438325421691896755 Time: 0.052236 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:24] [V] [TRT] Tactic: 4581732244273465060 Time: 0.052108 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 [10/04/2021-21:26:24] [V] [TRT] Tactic: 4934335053031119367 Time: 0.058 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:24] [V] [TRT] Tactic: 6797040896965118050 Time: 0.068228 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:24] [V] [TRT] Tactic: 8006952294591770973 Time: 0.053396 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:24] [V] [TRT] Tactic: -7210942453088153035 Time: 0.066432 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:24] [V] [TRT] Tactic: -6282183216199417697 Time: 0.04838 [10/04/2021-21:26:24] 
[V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_nn_v1 Tactic: -5026383765466876607 [10/04/2021-21:26:24] [V] [TRT] Tactic: -5026383765466876607 Time: 0.068336 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:24] [V] [TRT] Tactic: -5016725782072253841 Time: 0.052876 [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_nn_v1 Tactic: -1370999262391786833 [10/04/2021-21:26:24] [V] [TRT] Tactic: -1370999262391786833 Time: 0.054132 [10/04/2021-21:26:24] [V] [TRT] Fastest Tactic: -6282183216199417697 Time: 0.04838 [10/04/2021-21:26:24] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: FusedConvActConvolution Tactic: 10485759 [10/04/2021-21:26:24] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:24] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (CaskConvolution) [10/04/2021-21:26:24] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:24] [V] [TRT] Tactic: 1213457772632185722 Time: 0.055532 [10/04/2021-21:26:25] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:25] [V] [TRT] Tactic: 1713441381477652893 Time: 0.052868 [10/04/2021-21:26:25] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_c32_nn_v1 Tactic: 7125598890155666458 [10/04/2021-21:26:25] [V] [TRT] Tactic: 7125598890155666458 Time: 0.054144 [10/04/2021-21:26:25] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:25] [V] [TRT] Tactic: 8047041638267142825 Time: 0.048008 [10/04/2021-21:26:25] [V] 
[TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:25] [V] [TRT] Tactic: -7846982807478255793 Time: 0.048148 [10/04/2021-21:26:25] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:25] [V] [TRT] Tactic: -6459719113600909000 Time: 0.05272 [10/04/2021-21:26:25] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:25] [V] [TRT] Tactic: -4573925292554651334 Time: 0.06578 [10/04/2021-21:26:25] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:25] [V] [TRT] Tactic: -3566249366964946311 Time: 0.053252 [10/04/2021-21:26:25] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:25] [V] [TRT] Tactic: -2002418013575043687 Time: 0.067728 [10/04/2021-21:26:25] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_c32_nn_v1 Tactic: -1659631603542281459 [10/04/2021-21:26:25] [V] [TRT] Tactic: -1659631603542281459 Time: 0.068096 [10/04/2021-21:26:25] [V] [TRT] Fastest Tactic: 8047041638267142825 Time: 0.048008 [10/04/2021-21:26:25] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 8047041638267142825 [10/04/2021-21:26:25] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:25] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:25] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:25] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (FusedConvActConvolution) [10/04/2021-21:26:25] [V] [TRT] 
Tactic: 524287 Time: 0.048012 [10/04/2021-21:26:25] [V] [TRT] Tactic: 720895 Time: 0.056456 [10/04/2021-21:26:25] [V] [TRT] Tactic: 983039 Time: 0.047876 [10/04/2021-21:26:25] [V] [TRT] Tactic: 1048575 Time: 0.043276 [10/04/2021-21:26:25] [V] [TRT] Tactic: 1703935 Time: 0.038252 [10/04/2021-21:26:25] [V] [TRT] Tactic: 1966079 Time: 0.05786 [10/04/2021-21:26:25] [V] [TRT] Tactic: 2031615 Time: 0.061808 [10/04/2021-21:26:25] [V] [TRT] Tactic: 2228223 Time: 0.048768 [10/04/2021-21:26:25] [V] [TRT] Tactic: 2752511 Time: 0.061292 [10/04/2021-21:26:25] [V] [TRT] Tactic: 2818047 Time: 0.096124 [10/04/2021-21:26:25] [V] [TRT] Tactic: 2883583 Time: 0.07718 [10/04/2021-21:26:25] [V] [TRT] Tactic: 3014655 Time: 0.042484 [10/04/2021-21:26:25] [V] [TRT] Tactic: 3145727 Time: 0.052868 [10/04/2021-21:26:25] [V] [TRT] Tactic: 3473407 Time: 0.085516 [10/04/2021-21:26:25] [V] [TRT] Tactic: 3604479 Time: 0.042012 [10/04/2021-21:26:25] [V] [TRT] Tactic: 3735551 Time: 0.068744 [10/04/2021-21:26:25] [V] [TRT] Tactic: 4390911 Time: 0.07666 [10/04/2021-21:26:25] [V] [TRT] Tactic: 5046271 Time: 0.043272 [10/04/2021-21:26:25] [V] [TRT] Tactic: 5963775 Time: 0.06732 [10/04/2021-21:26:25] [V] [TRT] Tactic: 6160383 Time: 0.05276 [10/04/2021-21:26:25] [V] [TRT] Tactic: 6488063 Time: 0.049168 [10/04/2021-21:26:25] [V] [TRT] Tactic: 6881279 Time: 0.056324 [10/04/2021-21:26:25] [V] [TRT] Tactic: 7995391 Time: 0.048012 [10/04/2021-21:26:25] [V] [TRT] Tactic: 8585215 Time: 0.054156 [10/04/2021-21:26:25] [V] [TRT] Tactic: 8978431 Time: 0.065536 [10/04/2021-21:26:25] [V] [TRT] Tactic: 9043967 Time: 0.043144 [10/04/2021-21:26:25] [V] [TRT] Tactic: 9175039 Time: 0.04288 [10/04/2021-21:26:25] [V] [TRT] Tactic: 9502719 Time: 0.076152 [10/04/2021-21:26:25] [V] [TRT] Tactic: 9830399 Time: 0.081156 [10/04/2021-21:26:25] [V] [TRT] Tactic: 10027007 Time: 0.043128 [10/04/2021-21:26:25] [V] [TRT] Tactic: 10092543 Time: 0.076572 [10/04/2021-21:26:25] [V] [TRT] Tactic: 10289151 Time: 0.0576 [10/04/2021-21:26:25] [V] [TRT] Tactic: 10485759 Time: 0.038288 [10/04/2021-21:26:25] [V] [TRT] Tactic: 10813439 Time: 0.04288 [10/04/2021-21:26:25] [V] [TRT] Fastest Tactic: 1703935 Time: 0.038252 [10/04/2021-21:26:25] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (CudaGroupConvolution) [10/04/2021-21:26:25] [V] [TRT] Tactic: 0 Time: 0.105868 [10/04/2021-21:26:25] [V] [TRT] Tactic: 1 Time: 0.165636 [10/04/2021-21:26:25] [V] [TRT] Tactic: 2 Time: 0.272648 [10/04/2021-21:26:25] [V] [TRT] Tactic: 3 Time: 0.462604 [10/04/2021-21:26:25] [V] [TRT] Tactic: 4 Time: 0.871048 [10/04/2021-21:26:25] [V] [TRT] Tactic: 5 Time: 1.6887 [10/04/2021-21:26:25] [V] [TRT] Tactic: 6 Time: 0.091784 [10/04/2021-21:26:25] [V] [TRT] Tactic: 7 Time: 0.15206 [10/04/2021-21:26:25] [V] [TRT] Tactic: 8 Time: 0.259188 [10/04/2021-21:26:25] [V] [TRT] Tactic: 9 Time: 0.4457 [10/04/2021-21:26:25] [V] [TRT] Tactic: 10 Time: 0.85378 [10/04/2021-21:26:25] [V] [TRT] Tactic: 11 Time: 1.66964 [10/04/2021-21:26:25] [V] [TRT] Tactic: 12 Time: 0.086384 [10/04/2021-21:26:25] [V] [TRT] Tactic: 13 Time: 0.14414 [10/04/2021-21:26:25] [V] [TRT] Tactic: 14 Time: 0.249604 [10/04/2021-21:26:25] [V] [TRT] Tactic: 15 Time: 0.436496 [10/04/2021-21:26:25] [V] [TRT] Tactic: 16 Time: 0.84354 [10/04/2021-21:26:25] [V] [TRT] Tactic: 17 Time: 1.6572 [10/04/2021-21:26:25] [V] [TRT] Tactic: 18 Time: 0.072972 
[10/04/2021-21:26:25] [V] [TRT] Tactic: 19 Time: 0.105348 [10/04/2021-21:26:25] [V] [TRT] Tactic: 20 Time: 0.167808 [10/04/2021-21:26:25] [V] [TRT] Tactic: 21 Time: 0.278524 [10/04/2021-21:26:25] [V] [TRT] Tactic: 22 Time: 0.478984 [10/04/2021-21:26:25] [V] [TRT] Tactic: 23 Time: 0.898952 [10/04/2021-21:26:26] [V] [TRT] Tactic: 24 Time: 1.74182 [10/04/2021-21:26:26] [V] [TRT] Tactic: 25 Time: 0.07286 [10/04/2021-21:26:26] [V] [TRT] Tactic: 26 Time: 0.091128 [10/04/2021-21:26:26] [V] [TRT] Tactic: 27 Time: 0.153984 [10/04/2021-21:26:26] [V] [TRT] Tactic: 28 Time: 0.262772 [10/04/2021-21:26:26] [V] [TRT] Tactic: 29 Time: 0.460292 [10/04/2021-21:26:26] [V] [TRT] Tactic: 30 Time: 0.88052 [10/04/2021-21:26:26] [V] [TRT] Tactic: 31 Time: 1.72122 [10/04/2021-21:26:26] [V] [TRT] Tactic: 32 Time: 0.062704 [10/04/2021-21:26:26] [V] [TRT] Tactic: 33 Time: 0.085644 [10/04/2021-21:26:26] [V] [TRT] Tactic: 34 Time: 0.146044 [10/04/2021-21:26:26] [V] [TRT] Tactic: 35 Time: 0.254208 [10/04/2021-21:26:26] [V] [TRT] Tactic: 36 Time: 0.446092 [10/04/2021-21:26:26] [V] [TRT] Tactic: 37 Time: 0.866568 [10/04/2021-21:26:26] [V] [TRT] Tactic: 38 Time: 1.70804 [10/04/2021-21:26:26] [V] [TRT] Tactic: 39 Time: 0.062448 [10/04/2021-21:26:26] [V] [TRT] Tactic: 40 Time: 0.0745 [10/04/2021-21:26:26] [V] [TRT] Tactic: 41 Time: 0.105632 [10/04/2021-21:26:26] [V] [TRT] Tactic: 42 Time: 0.168176 [10/04/2021-21:26:26] [V] [TRT] Tactic: 43 Time: 0.28186 [10/04/2021-21:26:26] [V] [TRT] Tactic: 44 Time: 0.484872 [10/04/2021-21:26:26] [V] [TRT] Tactic: 45 Time: 0.912256 [10/04/2021-21:26:26] [V] [TRT] Tactic: 46 Time: 1.76551 [10/04/2021-21:26:26] [V] [TRT] Tactic: 47 Time: 0.05452 [10/04/2021-21:26:26] [V] [TRT] Tactic: 48 Time: 0.07218 [10/04/2021-21:26:26] [V] [TRT] Tactic: 49 Time: 0.091376 [10/04/2021-21:26:26] [V] [TRT] Tactic: 50 Time: 0.154372 [10/04/2021-21:26:26] [V] [TRT] Tactic: 51 Time: 0.263676 [10/04/2021-21:26:26] [V] [TRT] Tactic: 52 Time: 0.461188 [10/04/2021-21:26:26] [V] [TRT] Tactic: 53 Time: 0.8888 [10/04/2021-21:26:26] [V] [TRT] Tactic: 54 Time: 1.74171 [10/04/2021-21:26:26] [V] [TRT] Tactic: 55 Time: 0.065544 [10/04/2021-21:26:26] [V] [TRT] Tactic: 56 Time: 0.06708 [10/04/2021-21:26:26] [V] [TRT] Tactic: 57 Time: 0.086268 [10/04/2021-21:26:26] [V] [TRT] Tactic: 58 Time: 0.148872 [10/04/2021-21:26:26] [V] [TRT] Tactic: 59 Time: 0.257796 [10/04/2021-21:26:26] [V] [TRT] Tactic: 60 Time: 0.45492 [10/04/2021-21:26:26] [V] [TRT] Tactic: 61 Time: 0.88142 [10/04/2021-21:26:26] [V] [TRT] Tactic: 62 Time: 1.73173 [10/04/2021-21:26:26] [V] [TRT] Fastest Tactic: 47 Time: 0.05452 [10/04/2021-21:26:26] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (CaskConvolution) [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:26] [V] [TRT] Tactic: 66319348402778770 Time: 0.051216 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic 
Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_large_nt_v1 Tactic: 1931698692231796048 [10/04/2021-21:26:26] [V] [TRT] Tactic: 1931698692231796048 Time: 0.058868 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2004366221877065623 [10/04/2021-21:26:26] [V] [TRT] Tactic: 2004366221877065623 Time: 0.034804 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2169338034361422162 [10/04/2021-21:26:26] [V] [TRT] Tactic: 2169338034361422162 Time: 0.042748 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:26] [V] [TRT] Tactic: 2271687430539765460 Time: 0.072716 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:26] [V] [TRT] Tactic: 2284815435928292401 Time: 0.07656 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 [10/04/2021-21:26:26] [V] [TRT] Tactic: 3342635629009683930 Time: 0.038524 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3740557186499054067 [10/04/2021-21:26:26] [V] [TRT] Tactic: 3740557186499054067 Time: 0.05428 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:26] [V] [TRT] Tactic: 3768633326807889446 Time: 0.033396 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:26] [V] [TRT] Tactic: 5105539492142133503 Time: 0.038288 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:26] [V] [TRT] Tactic: 6129427510024568065 Time: 0.037752 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:26] [V] [TRT] Tactic: 7038629879025143810 Time: 0.076412 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:26] [V] [TRT] Tactic: 7039764449991095921 Time: 0.053364 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:26] [V] [TRT] Tactic: 7585914864117166414 Time: 0.033804 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:26] [V] [TRT] Tactic: -9134855779557081787 Time: 0.043392 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: 
volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -9114895246540757312 [10/04/2021-21:26:26] [V] [TRT] Tactic: -9114895246540757312 Time: 0.053756 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:26] [V] [TRT] Tactic: -8945506186161066102 Time: 0.051316 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_large_nt_v1 Tactic: -8787970778927801941 [10/04/2021-21:26:26] [V] [TRT] Tactic: -8787970778927801941 Time: 0.054144 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:26] [V] [TRT] Tactic: -8707098593641355108 Time: 0.058116 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:26] [V] [TRT] Tactic: -8343122771093605666 Time: 0.03828 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_large_nt_v1 Tactic: -8225786209923559953 [10/04/2021-21:26:26] [V] [TRT] Tactic: -8225786209923559953 Time: 0.073976 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_nt_v1 Tactic: -7373087278866484214 [10/04/2021-21:26:26] [V] [TRT] Tactic: -7373087278866484214 Time: 0.056596 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:26] [V] [TRT] Tactic: -7274936339335021260 Time: 0.057848 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r3s3 Tactic: -6068501086087743547 [10/04/2021-21:26:26] [V] [TRT] Tactic: -6068501086087743547 Time: 0.072444 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -5693797309970869451 [10/04/2021-21:26:26] [V] [TRT] Tactic: -5693797309970869451 Time: 0.04838 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -3036929044958869524 [10/04/2021-21:26:26] [V] [TRT] Tactic: -3036929044958869524 Time: 0.05684 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:26] [V] [TRT] Tactic: -2102888629196925141 Time: 0.053012 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:26] [V] [TRT] Tactic: -1832766392358096151 Time: 0.063004 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -1467400415054408443 [10/04/2021-21:26:26] [V] [TRT] Tactic: -1467400415054408443 Time: 0.071948 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:26] [V] [TRT] Tactic: -674235064782459186 Time: 0.072452 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + 
StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:26] [V] [TRT] Tactic: -629322288573675003 Time: 0.038528 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:26] [V] [TRT] Tactic: -182858804213663094 Time: 0.05298 [10/04/2021-21:26:26] [V] [TRT] Fastest Tactic: 3768633326807889446 Time: 0.033396 [10/04/2021-21:26:26] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 3768633326807889446 [10/04/2021-21:26:26] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:26] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:26] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:26] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:26] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:26] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:26] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:26] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (FusedConvActConvolution) [10/04/2021-21:26:26] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:26] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (CaskConvolution) [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 4438325421691896755 [10/04/2021-21:26:26] [V] [TRT] Tactic: 4438325421691896755 Time: 0.061304 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + 
StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:26] [V] [TRT] Tactic: 4581732244273465060 Time: 0.056576 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 [10/04/2021-21:26:26] [V] [TRT] Tactic: 4934335053031119367 Time: 0.06644 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:26] [V] [TRT] Tactic: 6797040896965118050 Time: 0.07578 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:26] [V] [TRT] Tactic: 8006952294591770973 Time: 0.06144 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:26] [V] [TRT] Tactic: -7210942453088153035 Time: 0.07452 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:26] [V] [TRT] Tactic: -6282183216199417697 Time: 0.056572 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_nn_v1 Tactic: -5026383765466876607 [10/04/2021-21:26:26] [V] [TRT] Tactic: -5026383765466876607 Time: 0.076948 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:26] [V] [TRT] Tactic: -5016725782072253841 Time: 0.060928 [10/04/2021-21:26:26] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_nn_v1 Tactic: -1370999262391786833 [10/04/2021-21:26:27] [V] [TRT] Tactic: -1370999262391786833 Time: 0.062732 [10/04/2021-21:26:27] [V] [TRT] Fastest Tactic: -6282183216199417697 Time: 0.056572 [10/04/2021-21:26:27] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6282183216199417697 [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (CaskConvolution) [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:27] [V] [TRT] Tactic: 1213457772632185722 Time: 0.062308 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:27] [V] [TRT] Tactic: 1713441381477652893 Time: 0.06212 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_c32_nn_v1 Tactic: 7125598890155666458 [10/04/2021-21:26:27] [V] [TRT] Tactic: 7125598890155666458 Time: 0.063488 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:27] [V] [TRT] Tactic: 8047041638267142825 Time: 0.055428 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:27] [V] [TRT] Tactic: -7846982807478255793 Time: 0.057344 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + 
StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:27] [V] [TRT] Tactic: -6459719113600909000 Time: 0.057724 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:27] [V] [TRT] Tactic: -4573925292554651334 Time: 0.072332 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:27] [V] [TRT] Tactic: -3566249366964946311 Time: 0.062332 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:27] [V] [TRT] Tactic: -2002418013575043687 Time: 0.072552 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_c32_nn_v1 Tactic: -1659631603542281459 [10/04/2021-21:26:27] [V] [TRT] Tactic: -1659631603542281459 Time: 0.073104 [10/04/2021-21:26:27] [V] [TRT] Fastest Tactic: 8047041638267142825 Time: 0.055428 [10/04/2021-21:26:27] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 8047041638267142825 [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:27] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (FusedConvActConvolution) [10/04/2021-21:26:27] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + 
StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (CudaGroupConvolution) [10/04/2021-21:26:27] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (CaskConvolution) [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:27] [V] [TRT] Tactic: 66319348402778770 Time: 0.055676 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_large_nt_v1 Tactic: 1931698692231796048 [10/04/2021-21:26:27] [V] [TRT] Tactic: 1931698692231796048 Time: 0.06424 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2004366221877065623 [10/04/2021-21:26:27] [V] [TRT] Tactic: 2004366221877065623 Time: 0.038148 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2169338034361422162 [10/04/2021-21:26:27] [V] [TRT] Tactic: 2169338034361422162 Time: 0.043364 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:27] [V] [TRT] Tactic: 2271687430539765460 Time: 0.08244 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:27] [V] [TRT] Tactic: 
2284815435928292401 Time: 0.077704 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 [10/04/2021-21:26:27] [V] [TRT] Tactic: 3342635629009683930 Time: 0.041624 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3740557186499054067 [10/04/2021-21:26:27] [V] [TRT] Tactic: 3740557186499054067 Time: 0.06108 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:27] [V] [TRT] Tactic: 3768633326807889446 Time: 0.03366 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:27] [V] [TRT] Tactic: 5105539492142133503 Time: 0.038388 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:27] [V] [TRT] Tactic: 6129427510024568065 Time: 0.029064 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:27] [V] [TRT] Tactic: 7038629879025143810 Time: 0.057568 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:27] [V] [TRT] Tactic: 7039764449991095921 Time: 0.04578 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:27] [V] [TRT] Tactic: 7585914864117166414 Time: 0.02788 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:27] [V] [TRT] Tactic: -9134855779557081787 Time: 0.035152 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -9114895246540757312 [10/04/2021-21:26:27] [V] [TRT] Tactic: -9114895246540757312 Time: 0.046084 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:27] [V] [TRT] Tactic: -8945506186161066102 Time: 0.039752 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_large_nt_v1 Tactic: -8787970778927801941 [10/04/2021-21:26:27] [V] [TRT] Tactic: -8787970778927801941 Time: 0.04608 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:27] [V] [TRT] Tactic: -8707098593641355108 Time: 0.049496 [10/04/2021-21:26:27] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:27] [V] [TRT] Tactic: -8343122771093605666 Time: 0.030192 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_large_nt_v1 Tactic: -8225786209923559953 [10/04/2021-21:26:27] [V] [TRT] Tactic: -8225786209923559953 Time: 0.061448 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_nt_v1 Tactic: -7373087278866484214 [10/04/2021-21:26:27] [V] [TRT] Tactic: -7373087278866484214 Time: 0.046064 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:27] [V] [TRT] Tactic: -7274936339335021260 Time: 0.046064 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r3s3 Tactic: -6068501086087743547 [10/04/2021-21:26:27] [V] [TRT] Tactic: -6068501086087743547 Time: 0.057104 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -5693797309970869451 [10/04/2021-21:26:27] [V] [TRT] Tactic: -5693797309970869451 Time: 0.037952 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -3036929044958869524 [10/04/2021-21:26:27] 
[V] [TRT] Tactic: -3036929044958869524 Time: 0.045808 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:27] [V] [TRT] Tactic: -2102888629196925141 Time: 0.045428 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:27] [V] [TRT] Tactic: -1832766392358096151 Time: 0.051832 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -1467400415054408443 [10/04/2021-21:26:27] [V] [TRT] Tactic: -1467400415054408443 Time: 0.057328 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:27] [V] [TRT] Tactic: -674235064782459186 Time: 0.061032 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:27] [V] [TRT] Tactic: -629322288573675003 Time: 0.0302 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:27] [V] [TRT] Tactic: -182858804213663094 Time: 0.042216 [10/04/2021-21:26:27] [V] [TRT] Fastest Tactic: 7585914864117166414 Time: 0.02788 [10/04/2021-21:26:27] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 7585914864117166414 [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> 
Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: 
Int8(4096,1024:4,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] 
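The per-layer sweeps above all follow the same pattern: every candidate tactic is timed, the best per runner is reported as "Fastest Tactic", and the overall winner is recorded with "Chose Runner Type". For the quant_conv2d_2 fusion, for example, the all-CHW32 combination settles on tactic 7585914864117166414 at 0.02788, roughly half the 0.055428 best found for the combinations that still use the CHW4 layout. A minimal sketch for condensing a saved verbose log into one line per autotuned layer follows; the file path passed on the command line and the summarize helper are illustrative, not part of trtexec, and it assumes the log was written with one record per line as trtexec emits it.

import re
import sys
from collections import OrderedDict

# Patterns matching the three kinds of records seen in the sweeps above.
RUNNER_RE  = re.compile(r"Timing Runner: (?P<layer>.+) \((?P<runner>\w+)\)\s*$")
FASTEST_RE = re.compile(r"Fastest Tactic: (?P<tactic>-?\d+) Time: (?P<time>[\d.]+)")
CHOSE_RE   = re.compile(r"Chose Runner Type: (?P<runner>\w+) Tactic: (?P<tactic>-?\d+)")

def summarize(path):
    """Map each autotuned layer to the runner/tactic TensorRT finally chose."""
    chosen = OrderedDict()
    layer = None   # layer named by the most recent "Timing Runner" record
    best = None    # time from the most recent "Fastest Tactic" record
    with open(path) as log:
        for line in log:
            if (m := RUNNER_RE.search(line)):
                layer = m.group("layer")
            elif (m := FASTEST_RE.search(line)):
                best = float(m.group("time"))
            elif (m := CHOSE_RE.search(line)) and layer:
                # A layer is retimed once per format combination; keep the last decision seen.
                chosen[layer] = (m.group("runner"), m.group("tactic"), best)
    return chosen

if __name__ == "__main__":
    for layer, (runner, tactic, best) in summarize(sys.argv[1]).items():
        print(f"{best}\t{runner}\ttactic {tactic}\t{layer}")

Times are printed as reported in the log; the association of the winning tactic with the last "Fastest Tactic" time is a simplification that holds when, as above, the final runner timed is the one chosen.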
*************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> 
Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] 
*************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1), Int8(1024,1024:32,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:27] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu (CaskConvolution) [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 4438325421691896755 [10/04/2021-21:26:27] [V] [TRT] Tactic: 4438325421691896755 Time: 0.038848 [10/04/2021-21:26:27] [V] [TRT] 
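These tactic sweeps are repeated on every engine build unless the builder is handed a timing cache to reuse (trtexec exposes this through its timing-cache options, shown as timingCacheMode/timingCacheFile in the build report). A rough sketch of the same idea with the TensorRT Python API, under the assumption of a TensorRT 8.x install; the cache file name timing.cache is a placeholder and error handling is minimal:

import tensorrt as trt

ONNX_PATH = "resnet.onnx"     # the Q/DQ model being built in this log
CACHE_PATH = "timing.cache"   # placeholder name for the serialized timing cache

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)   # QuantizeLinear/DequantizeLinear model, no calibrator needed

# Load a previously saved timing cache if present; otherwise start from an empty one.
try:
    with open(CACHE_PATH, "rb") as f:
        cache = config.create_timing_cache(f.read())
except FileNotFoundError:
    cache = config.create_timing_cache(b"")
config.set_timing_cache(cache, ignore_mismatch=False)

engine_bytes = builder.build_serialized_network(network, config)

# Persist the (possibly updated) cache so later builds can skip most of the tactic sweeps.
with open(CACHE_PATH, "wb") as f:
    f.write(cache.serialize())

With a warm cache, subsequent builds of the same model on the same device should spend far less time in the per-layer timing seen throughout this log.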
StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:27] [V] [TRT] Tactic: 4581732244273465060 Time: 0.037484 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 [10/04/2021-21:26:27] [V] [TRT] Tactic: 4934335053031119367 Time: 0.040888 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:27] [V] [TRT] Tactic: 6797040896965118050 Time: 0.048852 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:27] [V] [TRT] Tactic: 8006952294591770973 Time: 0.042068 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:27] [V] [TRT] Tactic: -7210942453088153035 Time: 0.049192 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:27] [V] [TRT] Tactic: -6282183216199417697 Time: 0.034732 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_nn_v1 Tactic: -5026383765466876607 [10/04/2021-21:26:27] [V] [TRT] Tactic: -5026383765466876607 Time: 0.052856 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:27] [V] [TRT] Tactic: -5016725782072253841 Time: 0.038756 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu 
Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_nn_v1 Tactic: -1370999262391786833 [10/04/2021-21:26:27] [V] [TRT] Tactic: -1370999262391786833 Time: 0.042216 [10/04/2021-21:26:27] [V] [TRT] Fastest Tactic: -6282183216199417697 Time: 0.034732 [10/04/2021-21:26:27] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6282183216199417697 [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu (CaskConvolution) [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:27] [V] [TRT] Tactic: 1213457772632185722 Time: 0.040452 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:27] [V] [TRT] Tactic: 1713441381477652893 Time: 0.04224 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_c32_nn_v1 Tactic: 7125598890155666458 [10/04/2021-21:26:27] [V] [TRT] Tactic: 7125598890155666458 Time: 0.041856 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:27] [V] [TRT] Tactic: 8047041638267142825 Time: 0.035912 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:27] [V] [TRT] Tactic: -7846982807478255793 Time: 0.03762 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:27] [V] [TRT] Tactic: -6459719113600909000 Time: 0.037864 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + 
StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:27] [V] [TRT] Tactic: -4573925292554651334 Time: 0.049868 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:27] [V] [TRT] Tactic: -3566249366964946311 Time: 0.038512 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:27] [V] [TRT] Tactic: -2002418013575043687 Time: 0.049904 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_c32_nn_v1 Tactic: -1659631603542281459 [10/04/2021-21:26:27] [V] [TRT] Tactic: -1659631603542281459 Time: 0.049872 [10/04/2021-21:26:27] [V] [TRT] Fastest Tactic: 8047041638267142825 Time: 0.035912 [10/04/2021-21:26:27] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 8047041638267142825 [10/04/2021-21:26:27] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:27] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu (CudaGroupConvolution) [10/04/2021-21:26:27] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu (CaskConvolution) [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:27] [V] [TRT] Tactic: 66319348402778770 Time: 0.03776 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + 
StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_large_nt_v1 Tactic: 1931698692231796048 [10/04/2021-21:26:27] [V] [TRT] Tactic: 1931698692231796048 Time: 0.042132 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2004366221877065623 [10/04/2021-21:26:27] [V] [TRT] Tactic: 2004366221877065623 Time: 0.022392 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2169338034361422162 [10/04/2021-21:26:27] [V] [TRT] Tactic: 2169338034361422162 Time: 0.030476 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:27] [V] [TRT] Tactic: 2271687430539765460 Time: 0.053492 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:27] [V] [TRT] Tactic: 2284815435928292401 Time: 0.057072 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 [10/04/2021-21:26:27] [V] [TRT] Tactic: 3342635629009683930 Time: 0.022404 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3740557186499054067 [10/04/2021-21:26:27] [V] [TRT] Tactic: 3740557186499054067 Time: 0.038404 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:27] [V] [TRT] Tactic: 3768633326807889446 Time: 0.022492 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:27] [V] [TRT] Tactic: 5105539492142133503 Time: 0.022152 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:27] [V] [TRT] Tactic: 6129427510024568065 Time: 0.02266 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:27] [V] [TRT] Tactic: 7038629879025143810 Time: 0.057208 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:27] [V] [TRT] Tactic: 7039764449991095921 Time: 0.0404 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:27] [V] [TRT] Tactic: 7585914864117166414 Time: 0.02402 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:27] [V] [TRT] Tactic: -9134855779557081787 Time: 0.0341 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: 
volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -9114895246540757312 [10/04/2021-21:26:27] [V] [TRT] Tactic: -9114895246540757312 Time: 0.04198 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:27] [V] [TRT] Tactic: -8945506186161066102 Time: 0.031808 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_large_nt_v1 Tactic: -8787970778927801941 [10/04/2021-21:26:27] [V] [TRT] Tactic: -8787970778927801941 Time: 0.042232 [10/04/2021-21:26:27] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:28] [V] [TRT] Tactic: -8707098593641355108 Time: 0.04336 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:28] [V] [TRT] Tactic: -8343122771093605666 Time: 0.026344 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_large_nt_v1 Tactic: -8225786209923559953 [10/04/2021-21:26:28] [V] [TRT] Tactic: -8225786209923559953 Time: 0.054624 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_nt_v1 Tactic: -7373087278866484214 [10/04/2021-21:26:28] [V] [TRT] Tactic: -7373087278866484214 Time: 0.04232 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:28] [V] [TRT] Tactic: -7274936339335021260 Time: 0.0422 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + 
QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r3s3 Tactic: -6068501086087743547 [10/04/2021-21:26:28] [V] [TRT] Tactic: -6068501086087743547 Time: 0.053216 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -5693797309970869451 [10/04/2021-21:26:28] [V] [TRT] Tactic: -5693797309970869451 Time: 0.030176 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -3036929044958869524 [10/04/2021-21:26:28] [V] [TRT] Tactic: -3036929044958869524 Time: 0.041716 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:28] [V] [TRT] Tactic: -2102888629196925141 Time: 0.038016 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:28] [V] [TRT] Tactic: -1832766392358096151 Time: 0.045448 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -1467400415054408443 [10/04/2021-21:26:28] [V] [TRT] Tactic: -1467400415054408443 Time: 0.053824 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:28] [V] [TRT] Tactic: -674235064782459186 Time: 0.053452 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:28] [V] [TRT] Tactic: -629322288573675003 Time: 0.02622 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:28] [V] [TRT] Tactic: -182858804213663094 Time: 0.037972 [10/04/2021-21:26:28] [V] [TRT] Fastest Tactic: 5105539492142133503 Time: 0.022152 [10/04/2021-21:26:28] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 5105539492142133503 [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning Reformat:Int8(4096,1024:4,32,1) -> Int8(1024,1024:32,32,1) *************** [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning Reformat:Int8(1024,1024:32,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (CudaDepthwiseConvolution) [10/04/2021-21:26:28] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (FusedConvActConvolution) [10/04/2021-21:26:28] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (CaskConvolution) [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_interior_nn_v1 Tactic: 3145259992339075399 [10/04/2021-21:26:28] [V] [TRT] Tactic: 3145259992339075399 Time: 0.026296 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_interior_nn_v1 Tactic: 4000990898022781625 [10/04/2021-21:26:28] [V] [TRT] Tactic: 4000990898022781625 Time: 0.030744 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 4438325421691896755 [10/04/2021-21:26:28] [V] [TRT] Tactic: 4438325421691896755 Time: 0.026816 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + 
QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:28] [V] [TRT] Tactic: 4581732244273465060 Time: 0.026512 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 [10/04/2021-21:26:28] [V] [TRT] Tactic: 4934335053031119367 Time: 0.026748 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:28] [V] [TRT] Tactic: 6797040896965118050 Time: 0.030692 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:28] [V] [TRT] Tactic: 8006952294591770973 Time: 0.026604 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_interior_nn_v1 Tactic: 8097855305881829878 [10/04/2021-21:26:28] [V] [TRT] Tactic: 8097855305881829878 Time: 0.024948 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:28] [V] [TRT] Tactic: -7210942453088153035 Time: 0.030288 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:28] [V] [TRT] Tactic: -6282183216199417697 Time: 0.023392 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:28] [V] [TRT] Tactic: -5016725782072253841 Time: 0.026212 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_interior_nn_v1 Tactic: -1543391652455542154 [10/04/2021-21:26:28] [V] [TRT] Tactic: -1543391652455542154 Time: 0.026732 [10/04/2021-21:26:28] [V] [TRT] Fastest Tactic: -6282183216199417697 Time: 0.023392 [10/04/2021-21:26:28] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6282183216199417697 [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(256,256:32,16,1) *************** 
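
Each timing-runner block in this log follows the same pattern: for one input/output format combination the builder times every candidate tactic ("Set Tactic Name ... Tactic: <id>" followed by "Tactic: <id> Time: ..."), reports the "Fastest Tactic", and then records the chosen runner ("Chose Runner Type"). The spread between candidates can be large; in the quant_conv2d_19 block above, the fastest tactic (5105539492142133503 at 0.022152) is about 2.6x quicker than the slowest one timed (7038629879025143810 at 0.057208). The Python sketch below tabulates the winning tactic per timing-runner block from a log like this one. It is a minimal sketch using only the standard library; it assumes the log has been saved with one entry per line (as trtexec actually prints it), and the path trtexec_verbose.log is an assumption.

#!/usr/bin/env python3
"""Summarize tactic autotuning results from a TensorRT verbose build log."""
import re
from collections import OrderedDict

RUNNER_RE  = re.compile(r"Timing Runner: (?P<name>.+?) \((?P<runner>\w+)\)")
FASTEST_RE = re.compile(r"Fastest Tactic: (?P<tactic>-?\d+) Time: (?P<time>[\d.]+)")
CHOSE_RE   = re.compile(r"Chose Runner Type: (?P<runner>\w+) Tactic: (?P<tactic>-?\d+)")

def summarize(path="trtexec_verbose.log"):        # path is an assumption
    results = OrderedDict()                       # node name -> (runner, tactic id, time)
    current, fastest = None, None
    with open(path) as f:
        for line in f:
            m = RUNNER_RE.search(line)
            if m:                                 # start of a new timing-runner block
                current = m.group("name")
                continue
            m = FASTEST_RE.search(line)
            if m:                                 # remember the block's fastest tactic
                fastest = (m.group("tactic"), float(m.group("time")))
                continue
            m = CHOSE_RE.search(line)
            if m and current and fastest is not None:
                results[current] = (m.group("runner"),) + fastest
                fastest = None
    return results

if __name__ == "__main__":
    for name, (runner, tactic, t) in summarize().items():
        print(f"{t:9.6f}  {runner:<18} tactic {tactic}  {name}")

Note that when the same fused node is autotuned for several format combinations, later blocks overwrite earlier ones in this summary; keeping a list per node instead is a one-line change.
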
[10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (CaskConvolution) [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_interior_c32_nn_v1 Tactic: 1025026069226666066 [10/04/2021-21:26:28] [V] [TRT] Tactic: 1025026069226666066 Time: 0.0307 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:28] [V] [TRT] Tactic: 1213457772632185722 Time: 0.026736 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:28] [V] [TRT] Tactic: 1713441381477652893 Time: 0.022776 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_interior_c32_nn_v1 Tactic: 2339361327868109050 [10/04/2021-21:26:28] [V] [TRT] Tactic: 2339361327868109050 Time: 0.026732 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:28] [V] [TRT] Tactic: 8047041638267142825 Time: 0.02376 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:28] [V] [TRT] Tactic: -7846982807478255793 Time: 0.026572 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_interior_c32_nn_v1 Tactic: -7686150779628967382 [10/04/2021-21:26:28] [V] [TRT] Tactic: -7686150779628967382 Time: 0.026632 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:28] [V] [TRT] Tactic: -6459719113600909000 Time: 0.026392 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:28] [V] [TRT] Tactic: 
-4573925292554651334 Time: 0.03032 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_interior_c32_nn_v1 Tactic: -4208188808979933945 [10/04/2021-21:26:28] [V] [TRT] Tactic: -4208188808979933945 Time: 0.0229 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:28] [V] [TRT] Tactic: -3566249366964946311 Time: 0.026828 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:28] [V] [TRT] Tactic: -2002418013575043687 Time: 0.030552 [10/04/2021-21:26:28] [V] [TRT] Fastest Tactic: 1713441381477652893 Time: 0.022776 [10/04/2021-21:26:28] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 1713441381477652893 [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:32,32,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (CudaDepthwiseConvolution) [10/04/2021-21:26:28] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (FusedConvActConvolution) [10/04/2021-21:26:28] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (CudaGroupConvolution) [10/04/2021-21:26:28] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (CaskConvolution) [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:28] [V] [TRT] Tactic: 66319348402778770 Time: 0.026004 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:28] [V] [TRT] Tactic: 2271687430539765460 Time: 0.031944 [10/04/2021-21:26:28] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:28] [V] [TRT] Tactic: 2284815435928292401 Time: 0.033744 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r1s1 Tactic: 2403706865230711816 [10/04/2021-21:26:28] [V] [TRT] Tactic: 2403706865230711816 Time: 0.034032 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 [10/04/2021-21:26:28] [V] [TRT] Tactic: 3342635629009683930 Time: 0.018552 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: 4208725185761175800 [10/04/2021-21:26:28] [V] [TRT] Tactic: 4208725185761175800 Time: 0.022508 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_interior_nt_v1 Tactic: 5754467717466343388 [10/04/2021-21:26:28] [V] [TRT] Tactic: 5754467717466343388 Time: 0.026756 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:28] [V] [TRT] Tactic: 6129427510024568065 Time: 0.018552 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: 6866122311506689057 [10/04/2021-21:26:28] [V] [TRT] Tactic: 6866122311506689057 Time: 0.02624 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r1s1 Tactic: 
7003462296689159880 [10/04/2021-21:26:28] [V] [TRT] Tactic: 7003462296689159880 Time: 0.034148 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:28] [V] [TRT] Tactic: 7038629879025143810 Time: 0.034128 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:28] [V] [TRT] Tactic: 7039764449991095921 Time: 0.025956 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_interior_nt_v1 Tactic: 7584772692956718645 [10/04/2021-21:26:28] [V] [TRT] Tactic: 7584772692956718645 Time: 0.034544 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:28] [V] [TRT] Tactic: -9134855779557081787 Time: 0.022512 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -9114895246540757312 [10/04/2021-21:26:28] [V] [TRT] Tactic: -9114895246540757312 Time: 0.02664 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:28] [V] [TRT] Tactic: -8945506186161066102 Time: 0.022616 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:28] [V] [TRT] Tactic: -8707098593641355108 Time: 0.029048 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:28] [V] [TRT] Tactic: -8343122771093605666 
Time: 0.018532 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: -7979528930672358310 [10/04/2021-21:26:28] [V] [TRT] Tactic: -7979528930672358310 Time: 0.018508 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: -7743660625342027105 [10/04/2021-21:26:28] [V] [TRT] Tactic: -7743660625342027105 Time: 0.022092 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:28] [V] [TRT] Tactic: -7274936339335021260 Time: 0.026748 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r1s1 Tactic: -5081515910504910274 [10/04/2021-21:26:28] [V] [TRT] Tactic: -5081515910504910274 Time: 0.026572 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r1s1 Tactic: -4693009430365516309 [10/04/2021-21:26:28] [V] [TRT] Tactic: -4693009430365516309 Time: 0.015704 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: -3726322024058434766 [10/04/2021-21:26:28] [V] [TRT] Tactic: -3726322024058434766 Time: 0.018288 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:28] [V] [TRT] Tactic: -2102888629196925141 Time: 0.02598 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r1s1 Tactic: -2029608708402886013 
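
The per-tactic timing lines in this section appear only because the engine is being built with verbose logging. When building from a script rather than trtexec, the same kind of output can be obtained by passing a VERBOSE logger to the builder. The sketch below is a minimal outline under stated assumptions: it uses the TensorRT 8.x Python bindings, keeps the resnet.onnx model from this run, and the workspace size and output file name are illustrative rather than taken from the log.

import tensorrt as trt

# A VERBOSE logger is what makes TensorRT emit the per-tactic
# "Autotuning" / "Timing Runner" lines seen in this log.
logger = trt.Logger(trt.Logger.VERBOSE)

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("resnet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)     # INT8 build, as in this log
config.max_workspace_size = 1 << 30       # 1 GiB; illustrative value

serialized = builder.build_serialized_network(network, config)
if serialized is None:
    raise SystemExit("engine build failed")
with open("resnet_int8.engine", "wb") as f:   # output name is an assumption
    f.write(serialized)
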
[10/04/2021-21:26:28] [V] [TRT] Tactic: -2029608708402886013 Time: 0.018612 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:28] [V] [TRT] Tactic: -1832766392358096151 Time: 0.02992 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_interior_nt_v1 Tactic: -1383447415429797909 [10/04/2021-21:26:28] [V] [TRT] Tactic: -1383447415429797909 Time: 0.02682 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_interior_nt_v1 Tactic: -743032628982127825 [10/04/2021-21:26:28] [V] [TRT] Tactic: -743032628982127825 Time: 0.026772 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:28] [V] [TRT] Tactic: -674235064782459186 Time: 0.03404 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:28] [V] [TRT] Tactic: -629322288573675003 Time: 0.021152 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:28] [V] [TRT] Tactic: -182858804213663094 Time: 0.026448 [10/04/2021-21:26:28] [V] [TRT] Fastest Tactic: -4693009430365516309 Time: 0.015704 [10/04/2021-21:26:28] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -4693009430365516309 [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:28] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:28] [V] [TRT] Tactic: 1002 Time: 0.02214 [10/04/2021-21:26:28] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:28] [V] [TRT] Tactic: 0 Time: 0.010364 [10/04/2021-21:26:28] [V] [TRT] Fastest Tactic: 0 Time: 0.010364 [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat 
(Reformat) [10/04/2021-21:26:28] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:28] [V] [TRT] Tactic: 1002 Time: 0.02224 [10/04/2021-21:26:28] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:28] [V] [TRT] Tactic: 0 Time: 0.010476 [10/04/2021-21:26:28] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:28] [V] [TRT] Tactic: 1 Time: 0.010588 [10/04/2021-21:26:28] [V] [TRT] Fastest Tactic: 0 Time: 0.010476 [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:28] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu (FusedConvActConvolution) [10/04/2021-21:26:28] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu (CaskConvolution) [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 4438325421691896755 [10/04/2021-21:26:28] [V] [TRT] Tactic: 4438325421691896755 Time: 0.059944 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:28] [V] [TRT] Tactic: 4581732244273465060 Time: 0.054024 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 
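
The "Setting a default quantization params because quantization data is missing" messages in the Optimizer Reformat timings above indicate that the tensors of those optimizer-inserted reformat candidates carry no scale information, so TensorRT substitutes a default range while measuring them. If tensors in your own network genuinely lack INT8 scales (for example, when building without Q/DQ ops and without a calibrator), one way to supply ranges from a script is to set them explicitly on the network tensors. A minimal sketch, assuming network is an already-populated trt.INetworkDefinition; the helper name and the uniform 4.0 range are illustrative only:

def set_uniform_dynamic_ranges(network, max_abs=4.0):
    """Attach an explicit INT8 dynamic range to every tensor in the network.

    Illustrative only: real deployments derive per-tensor ranges from a
    calibrator or from Q/DQ scales already present in the graph, and would
    skip tensors that carry scale information, rather than using a constant.
    """
    for i in range(network.num_inputs):
        network.get_input(i).set_dynamic_range(-max_abs, max_abs)
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        for j in range(layer.num_outputs):
            layer.get_output(j).set_dynamic_range(-max_abs, max_abs)
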
[10/04/2021-21:26:28] [V] [TRT] Tactic: 4934335053031119367 Time: 0.065512 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:28] [V] [TRT] Tactic: 6797040896965118050 Time: 0.077312 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:28] [V] [TRT] Tactic: 8006952294591770973 Time: 0.069276 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:28] [V] [TRT] Tactic: -7210942453088153035 Time: 0.07652 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:28] [V] [TRT] Tactic: -6282183216199417697 Time: 0.057212 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_nn_v1 Tactic: -5026383765466876607 [10/04/2021-21:26:28] [V] [TRT] Tactic: -5026383765466876607 Time: 0.079628 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:28] [V] [TRT] Tactic: -5016725782072253841 Time: 0.057552 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_nn_v1 Tactic: -1370999262391786833 [10/04/2021-21:26:28] [V] [TRT] Tactic: -1370999262391786833 Time: 0.061752 [10/04/2021-21:26:28] [V] [TRT] Fastest Tactic: 4581732244273465060 Time: 0.054024 [10/04/2021-21:26:28] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 4581732244273465060 [10/04/2021-21:26:28] [V] [TRT] *************** Autotuning 
format combination: Int8(2048,256:4,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu (CaskConvolution) [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:28] [V] [TRT] Tactic: 1213457772632185722 Time: 0.063724 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:28] [V] [TRT] Tactic: 1713441381477652893 Time: 0.065376 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_c32_nn_v1 Tactic: 7125598890155666458 [10/04/2021-21:26:28] [V] [TRT] Tactic: 7125598890155666458 Time: 0.057728 [10/04/2021-21:26:28] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:29] [V] [TRT] Tactic: 8047041638267142825 Time: 0.055304 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:29] [V] [TRT] Tactic: -7846982807478255793 Time: 0.053212 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:29] [V] [TRT] Tactic: -6459719113600909000 Time: 0.05228 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: 
volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:29] [V] [TRT] Tactic: -4573925292554651334 Time: 0.072536 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:29] [V] [TRT] Tactic: -3566249366964946311 Time: 0.061412 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:29] [V] [TRT] Tactic: -2002418013575043687 Time: 0.076948 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_c32_nn_v1 Tactic: -1659631603542281459 [10/04/2021-21:26:29] [V] [TRT] Tactic: -1659631603542281459 Time: 0.077016 [10/04/2021-21:26:29] [V] [TRT] Fastest Tactic: -6459719113600909000 Time: 0.05228 [10/04/2021-21:26:29] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6459719113600909000 [10/04/2021-21:26:29] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:29] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:29] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:29] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu (FusedConvActConvolution) [10/04/2021-21:26:29] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:29] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu (CudaGroupConvolution) [10/04/2021-21:26:29] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:29] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + 
StatefulPartitionedCall/model/activation_20/Relu (CaskConvolution) [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:29] [V] [TRT] Tactic: 66319348402778770 Time: 0.042816 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_large_nt_v1 Tactic: 1931698692231796048 [10/04/2021-21:26:29] [V] [TRT] Tactic: 1931698692231796048 Time: 0.048364 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2004366221877065623 [10/04/2021-21:26:29] [V] [TRT] Tactic: 2004366221877065623 Time: 0.02402 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2169338034361422162 [10/04/2021-21:26:29] [V] [TRT] Tactic: 2169338034361422162 Time: 0.034164 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:29] [V] [TRT] Tactic: 2271687430539765460 Time: 0.061416 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:29] [V] [TRT] Tactic: 2284815435928292401 Time: 0.057204 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 [10/04/2021-21:26:29] [V] [TRT] Tactic: 3342635629009683930 Time: 0.023512 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3740557186499054067 [10/04/2021-21:26:29] [V] [TRT] Tactic: 3740557186499054067 Time: 0.041812 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:29] [V] [TRT] Tactic: 3768633326807889446 Time: 0.019792 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:29] [V] [TRT] Tactic: 5105539492142133503 Time: 0.022384 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:29] [V] [TRT] Tactic: 6129427510024568065 Time: 0.022452 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:29] [V] [TRT] Tactic: 7038629879025143810 Time: 0.056976 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:29] [V] [TRT] Tactic: 7039764449991095921 Time: 0.04602 
[10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:29] [V] [TRT] Tactic: 7585914864117166414 Time: 0.026216 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:29] [V] [TRT] Tactic: -9134855779557081787 Time: 0.034068 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -9114895246540757312 [10/04/2021-21:26:29] [V] [TRT] Tactic: -9114895246540757312 Time: 0.046052 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:29] [V] [TRT] Tactic: -8945506186161066102 Time: 0.034152 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_large_nt_v1 Tactic: -8787970778927801941 [10/04/2021-21:26:29] [V] [TRT] Tactic: -8787970778927801941 Time: 0.04608 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:29] [V] [TRT] Tactic: -8707098593641355108 Time: 0.045684 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:29] [V] [TRT] Tactic: -8343122771093605666 Time: 0.026376 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_large_nt_v1 Tactic: -8225786209923559953 [10/04/2021-21:26:29] [V] [TRT] Tactic: -8225786209923559953 Time: 0.061548 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_nt_v1 Tactic: -7373087278866484214 [10/04/2021-21:26:29] [V] [TRT] Tactic: -7373087278866484214 Time: 0.046068 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:29] [V] [TRT] Tactic: -7274936339335021260 Time: 0.045892 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r3s3 Tactic: -6068501086087743547 [10/04/2021-21:26:29] [V] [TRT] Tactic: -6068501086087743547 Time: 0.056196 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -5693797309970869451 [10/04/2021-21:26:29] [V] [TRT] Tactic: -5693797309970869451 Time: 0.033948 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -3036929044958869524 [10/04/2021-21:26:29] [V] [TRT] Tactic: -3036929044958869524 Time: 0.045424 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:29] [V] [TRT] Tactic: -2102888629196925141 Time: 0.041944 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:29] [V] [TRT] Tactic: -1832766392358096151 Time: 0.049176 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -1467400415054408443 [10/04/2021-21:26:29] [V] [TRT] Tactic: -1467400415054408443 Time: 0.056944 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:29] [V] [TRT] Tactic: -674235064782459186 Time: 0.060864 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:29] [V] [TRT] Tactic: -629322288573675003 Time: 0.027896 [10/04/2021-21:26:29] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:29] [V] [TRT] Tactic: -182858804213663094 Time: 0.04284 [10/04/2021-21:26:29] [V] [TRT] Fastest Tactic: 3768633326807889446 Time: 0.019792 [10/04/2021-21:26:29] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 3768633326807889446 [10/04/2021-21:26:29] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:29] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:29] [V] [TRT] Tactic: 1002 Time: 0.022448 
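Aside (not part of the trtexec output): the autotuner entries above follow a fixed pattern — each candidate is timed as "Tactic: <id> Time: <value>", the per-runner minimum is reported as "Fastest Tactic", and the final per-layer decision appears as ">>>>>>>>>>>>>>> Chose Runner Type: <runner> Tactic: <id>". A minimal Python sketch for condensing a saved copy of this verbose output into one line per decision is shown here; it assumes the console output was redirected to a file, the filename "trtexec_verbose.log" is an assumption, and the regular expressions only cover the line shapes visible in this excerpt.

# Hypothetical helper, not part of trtexec: summarize per-layer tactic choices
# from a saved verbose log (one decision per printed line).
import re
import sys

TIMING = re.compile(r"Tactic: (-?\d+) Time: ([0-9.]+)")          # "Tactic: <id> Time: <value>"
RUNNER = re.compile(r"-{3,} Timing Runner: (.+?) \(")            # "--------------- Timing Runner: <layer> (<type>)"
CHOSE  = re.compile(r"Chose Runner Type: (\S+) Tactic: (-?\d+)") # final decision line

def summarize(path):
    layer, times = None, {}
    with open(path) as log:
        for line in log:
            m = RUNNER.search(line)
            if m:
                layer = m.group(1)
            m = TIMING.search(line)
            if m:
                # remember the most recent time measured for this tactic id
                times[m.group(1)] = float(m.group(2))
            m = CHOSE.search(line)
            if m:
                runner, tactic = m.groups()
                print(f"{layer}: {runner} tactic {tactic} time {times.get(tactic, float('nan'))}")
                # drop timings from this layer so ids reused by later layers do not leak
                times.clear()

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "trtexec_verbose.log")

Invoked with the saved log path as its only argument, this prints one summary line per "Chose Runner Type" decision, such as the CaskConvolution choices recorded above.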
[10/04/2021-21:26:29] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:29] [V] [TRT] Tactic: 0 Time: 0.010508 [10/04/2021-21:26:29] [V] [TRT] Fastest Tactic: 0 Time: 0.010508 [10/04/2021-21:26:29] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:29] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:29] [V] [TRT] Tactic: 1002 Time: 0.022112 [10/04/2021-21:26:29] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:29] [V] [TRT] Tactic: 0 Time: 0.01038 [10/04/2021-21:26:29] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:29] [V] [TRT] Tactic: 1 Time: 0.010492 [10/04/2021-21:26:29] [V] [TRT] Fastest Tactic: 0 Time: 0.01038 [10/04/2021-21:26:29] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:29] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:29] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:29] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:29] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:29] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (FusedConvActConvolution) [10/04/2021-21:26:29] [V] [TRT] Tactic: 524287 Time: 0.039768 [10/04/2021-21:26:29] [V] [TRT] Tactic: 720895 Time: 0.04538 [10/04/2021-21:26:29] [V] [TRT] Tactic: 983039 Time: 0.03016 [10/04/2021-21:26:29] [V] [TRT] Tactic: 1048575 Time: 0.033788 [10/04/2021-21:26:29] [V] [TRT] Tactic: 1703935 Time: 0.024964 [10/04/2021-21:26:29] [V] [TRT] Tactic: 1769471 Time: 0.041096 [10/04/2021-21:26:29] [V] [TRT] Tactic: 1966079 Time: 0.057036 [10/04/2021-21:26:29] [V] [TRT] Tactic: 2031615 Time: 0.049268 [10/04/2021-21:26:29] [V] [TRT] Tactic: 2228223 Time: 0.041564 [10/04/2021-21:26:29] [V] [TRT] Tactic: 2621439 Time: 0.022144 [10/04/2021-21:26:29] [V] [TRT] Tactic: 2752511 Time: 0.039892 [10/04/2021-21:26:29] [V] [TRT] Tactic: 2818047 Time: 0.050572 [10/04/2021-21:26:29] [V] [TRT] Tactic: 2883583 Time: 0.0607 [10/04/2021-21:26:29] [V] [TRT] Tactic: 3014655 Time: 0.029916 [10/04/2021-21:26:29] [V] [TRT] Tactic: 3145727 Time: 0.03406 [10/04/2021-21:26:29] [V] [TRT] Tactic: 3473407 Time: 0.05306 [10/04/2021-21:26:29] [V] [TRT] Tactic: 3604479 Time: 0.029908 [10/04/2021-21:26:29] [V] [TRT] Tactic: 3735551 Time: 0.040836 [10/04/2021-21:26:29] [V] [TRT] Tactic: 4390911 Time: 0.060548 [10/04/2021-21:26:29] [V] [TRT] Tactic: 5046271 Time: 0.028276 [10/04/2021-21:26:29] [V] [TRT] Tactic: 5963775 Time: 0.051532 [10/04/2021-21:26:29] [V] [TRT] Tactic: 6160383 Time: 0.045596 [10/04/2021-21:26:29] [V] [TRT] Tactic: 
6488063 Time: 0.045192 [10/04/2021-21:26:29] [V] [TRT] Tactic: 6881279 Time: 0.042576 [10/04/2021-21:26:29] [V] [TRT] Tactic: 7274495 Time: 0.033064 [10/04/2021-21:26:29] [V] [TRT] Tactic: 7864319 Time: 0.025992 [10/04/2021-21:26:29] [V] [TRT] Tactic: 7995391 Time: 0.037888 [10/04/2021-21:26:29] [V] [TRT] Tactic: 8585215 Time: 0.045244 [10/04/2021-21:26:29] [V] [TRT] Tactic: 8847359 Time: 0.0362 [10/04/2021-21:26:29] [V] [TRT] Tactic: 8978431 Time: 0.050672 [10/04/2021-21:26:29] [V] [TRT] Tactic: 9043967 Time: 0.029912 [10/04/2021-21:26:29] [V] [TRT] Tactic: 9175039 Time: 0.030316 [10/04/2021-21:26:29] [V] [TRT] Tactic: 9502719 Time: 0.0609 [10/04/2021-21:26:29] [V] [TRT] Tactic: 9830399 Time: 0.045496 [10/04/2021-21:26:29] [V] [TRT] Tactic: 10027007 Time: 0.0356 [10/04/2021-21:26:29] [V] [TRT] Tactic: 10092543 Time: 0.060556 [10/04/2021-21:26:30] [V] [TRT] Tactic: 10289151 Time: 0.05668 [10/04/2021-21:26:30] [V] [TRT] Tactic: 10485759 Time: 0.026068 [10/04/2021-21:26:30] [V] [TRT] Tactic: 10682367 Time: 0.022364 [10/04/2021-21:26:30] [V] [TRT] Tactic: 10813439 Time: 0.025852 [10/04/2021-21:26:30] [V] [TRT] Fastest Tactic: 2621439 Time: 0.022144 [10/04/2021-21:26:30] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (CaskConvolution) [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 4438325421691896755 [10/04/2021-21:26:30] [V] [TRT] Tactic: 4438325421691896755 Time: 0.05298 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:30] [V] [TRT] Tactic: 4581732244273465060 Time: 0.049804 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 [10/04/2021-21:26:30] [V] [TRT] Tactic: 4934335053031119367 Time: 0.061556 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:30] [V] [TRT] Tactic: 6797040896965118050 Time: 0.07318 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:30] [V] [TRT] Tactic: 8006952294591770973 Time: 0.061432 
[10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:30] [V] [TRT] Tactic: -7210942453088153035 Time: 0.068808 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:30] [V] [TRT] Tactic: -6282183216199417697 Time: 0.050792 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_nn_v1 Tactic: -5026383765466876607 [10/04/2021-21:26:30] [V] [TRT] Tactic: -5026383765466876607 Time: 0.073172 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:30] [V] [TRT] Tactic: -5016725782072253841 Time: 0.049432 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_nn_v1 Tactic: -1370999262391786833 [10/04/2021-21:26:30] [V] [TRT] Tactic: -1370999262391786833 Time: 0.05414 [10/04/2021-21:26:30] [V] [TRT] Fastest Tactic: -5016725782072253841 Time: 0.049432 [10/04/2021-21:26:30] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: FusedConvActConvolution Tactic: 2621439 [10/04/2021-21:26:30] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:30] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (CaskConvolution) [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:30] [V] [TRT] Tactic: 1213457772632185722 Time: 0.057488 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:30] [V] [TRT] Tactic: 1713441381477652893 
Time: 0.0615 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_c32_nn_v1 Tactic: 7125598890155666458 [10/04/2021-21:26:30] [V] [TRT] Tactic: 7125598890155666458 Time: 0.053992 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:30] [V] [TRT] Tactic: 8047041638267142825 Time: 0.049524 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:30] [V] [TRT] Tactic: -7846982807478255793 Time: 0.04862 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:30] [V] [TRT] Tactic: -6459719113600909000 Time: 0.048496 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:30] [V] [TRT] Tactic: -4573925292554651334 Time: 0.068204 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:30] [V] [TRT] Tactic: -3566249366964946311 Time: 0.054012 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:30] [V] [TRT] Tactic: -2002418013575043687 Time: 0.071932 [10/04/2021-21:26:30] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_c32_nn_v1 Tactic: -1659631603542281459 [10/04/2021-21:26:30] [V] [TRT] Tactic: -1659631603542281459 Time: 0.072584 [10/04/2021-21:26:30] [V] [TRT] Fastest Tactic: -6459719113600909000 Time: 0.048496 [10/04/2021-21:26:30] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: 
CaskConvolution Tactic: -6459719113600909000 [10/04/2021-21:26:30] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:30] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:30] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:30] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (FusedConvActConvolution) [10/04/2021-21:26:30] [V] [TRT] Tactic: 524287 Time: 0.040828 [10/04/2021-21:26:30] [V] [TRT] Tactic: 720895 Time: 0.046012 [10/04/2021-21:26:30] [V] [TRT] Tactic: 983039 Time: 0.0292 [10/04/2021-21:26:30] [V] [TRT] Tactic: 1048575 Time: 0.033248 [10/04/2021-21:26:30] [V] [TRT] Tactic: 1703935 Time: 0.0252 [10/04/2021-21:26:30] [V] [TRT] Tactic: 1769471 Time: 0.03318 [10/04/2021-21:26:30] [V] [TRT] Tactic: 1966079 Time: 0.055408 [10/04/2021-21:26:30] [V] [TRT] Tactic: 2031615 Time: 0.04834 [10/04/2021-21:26:30] [V] [TRT] Tactic: 2228223 Time: 0.044408 [10/04/2021-21:26:30] [V] [TRT] Tactic: 2621439 Time: 0.021888 [10/04/2021-21:26:30] [V] [TRT] Tactic: 2752511 Time: 0.051956 [10/04/2021-21:26:30] [V] [TRT] Tactic: 2818047 Time: 0.052056 [10/04/2021-21:26:30] [V] [TRT] Tactic: 2883583 Time: 0.061956 [10/04/2021-21:26:30] [V] [TRT] Tactic: 3014655 Time: 0.029 [10/04/2021-21:26:30] [V] [TRT] Tactic: 3145727 Time: 0.033132 [10/04/2021-21:26:30] [V] [TRT] Tactic: 3473407 Time: 0.054216 [10/04/2021-21:26:30] [V] [TRT] Tactic: 3604479 Time: 0.02928 [10/04/2021-21:26:30] [V] [TRT] Tactic: 3735551 Time: 0.033096 [10/04/2021-21:26:30] [V] [TRT] Tactic: 4390911 Time: 0.05936 [10/04/2021-21:26:30] [V] [TRT] Tactic: 5046271 Time: 0.033108 [10/04/2021-21:26:30] [V] [TRT] Tactic: 5963775 Time: 0.052076 [10/04/2021-21:26:30] [V] [TRT] Tactic: 6160383 Time: 0.044648 [10/04/2021-21:26:30] [V] [TRT] Tactic: 6488063 Time: 0.037076 [10/04/2021-21:26:30] [V] [TRT] Tactic: 6881279 Time: 0.043684 [10/04/2021-21:26:30] [V] [TRT] Tactic: 7274495 Time: 0.03288 [10/04/2021-21:26:30] [V] [TRT] Tactic: 7864319 Time: 0.025828 [10/04/2021-21:26:30] [V] [TRT] Tactic: 7995391 Time: 0.03698 [10/04/2021-21:26:30] [V] [TRT] Tactic: 8585215 Time: 0.044612 [10/04/2021-21:26:30] [V] [TRT] Tactic: 8847359 Time: 0.029444 [10/04/2021-21:26:30] [V] [TRT] Tactic: 8978431 Time: 0.052072 [10/04/2021-21:26:30] [V] [TRT] Tactic: 9043967 Time: 0.029704 [10/04/2021-21:26:30] [V] [TRT] Tactic: 9175039 Time: 0.029408 [10/04/2021-21:26:30] [V] [TRT] Tactic: 9502719 Time: 0.059616 [10/04/2021-21:26:30] [V] [TRT] Tactic: 9830399 Time: 0.045804 [10/04/2021-21:26:30] [V] [TRT] Tactic: 10027007 Time: 0.037096 [10/04/2021-21:26:30] [V] [TRT] Tactic: 10092543 Time: 0.0596 [10/04/2021-21:26:30] [V] [TRT] Tactic: 10289151 Time: 0.05584 [10/04/2021-21:26:30] [V] [TRT] Tactic: 10485759 Time: 0.025712 [10/04/2021-21:26:30] [V] [TRT] Tactic: 10682367 Time: 0.021968 [10/04/2021-21:26:30] [V] [TRT] Tactic: 10813439 Time: 0.025716 [10/04/2021-21:26:30] [V] [TRT] Fastest Tactic: 2621439 Time: 0.021888 [10/04/2021-21:26:30] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (CudaGroupConvolution) [10/04/2021-21:26:30] [V] [TRT] Tactic: 0 Time: 0.219256 [10/04/2021-21:26:30] [V] [TRT] Tactic: 1 Time: 0.361088 [10/04/2021-21:26:30] [V] [TRT] Tactic: 2 Time: 0.607728 [10/04/2021-21:26:30] [V] [TRT] Tactic: 3 Time: 1.15939 [10/04/2021-21:26:30] [V] [TRT] Tactic: 4 Time: 2.2631 [10/04/2021-21:26:31] [V] [TRT] Tactic: 5 Time: 4.46867 [10/04/2021-21:26:31] [V] [TRT] Tactic: 6 Time: 0.19994 [10/04/2021-21:26:31] [V] [TRT] Tactic: 7 Time: 0.3411 [10/04/2021-21:26:31] [V] [TRT] Tactic: 8 Time: 0.584296 [10/04/2021-21:26:31] [V] [TRT] Tactic: 9 Time: 1.13572 [10/04/2021-21:26:31] [V] [TRT] Tactic: 10 Time: 2.23933 [10/04/2021-21:26:31] [V] [TRT] Tactic: 11 Time: 4.44549 [10/04/2021-21:26:31] [V] [TRT] Tactic: 12 Time: 0.189896 [10/04/2021-21:26:31] [V] [TRT] Tactic: 13 Time: 0.330872 [10/04/2021-21:26:31] [V] [TRT] Tactic: 14 Time: 0.574952 [10/04/2021-21:26:31] [V] [TRT] Tactic: 15 Time: 1.1252 [10/04/2021-21:26:31] [V] [TRT] Tactic: 16 Time: 2.22701 [10/04/2021-21:26:31] [V] [TRT] Tactic: 17 Time: 4.43079 [10/04/2021-21:26:31] [V] [TRT] Tactic: 18 Time: 0.134524 [10/04/2021-21:26:31] [V] [TRT] Tactic: 19 Time: 0.219216 [10/04/2021-21:26:31] [V] [TRT] Tactic: 20 Time: 0.36464 [10/04/2021-21:26:31] [V] [TRT] Tactic: 21 Time: 0.615888 [10/04/2021-21:26:31] [V] [TRT] Tactic: 22 Time: 1.17886 [10/04/2021-21:26:31] [V] [TRT] Tactic: 23 Time: 2.30426 [10/04/2021-21:26:31] [V] [TRT] Tactic: 24 Time: 4.55514 [10/04/2021-21:26:31] [V] [TRT] Tactic: 25 Time: 0.113744 [10/04/2021-21:26:31] [V] [TRT] Tactic: 26 Time: 0.200936 [10/04/2021-21:26:31] [V] [TRT] Tactic: 27 Time: 0.345468 [10/04/2021-21:26:31] [V] [TRT] Tactic: 28 Time: 0.593916 [10/04/2021-21:26:31] [V] [TRT] Tactic: 29 Time: 1.15504 [10/04/2021-21:26:31] [V] [TRT] Tactic: 30 Time: 2.28019 [10/04/2021-21:26:31] [V] [TRT] Tactic: 31 Time: 4.53118 [10/04/2021-21:26:31] [V] [TRT] Tactic: 32 Time: 0.106756 [10/04/2021-21:26:31] [V] [TRT] Tactic: 33 Time: 0.154856 [10/04/2021-21:26:31] [V] [TRT] Tactic: 34 Time: 0.270844 [10/04/2021-21:26:31] [V] [TRT] Tactic: 35 Time: 0.470952 [10/04/2021-21:26:31] [V] [TRT] Tactic: 36 Time: 0.922324 [10/04/2021-21:26:31] [V] [TRT] Tactic: 37 Time: 1.82976 [10/04/2021-21:26:31] [V] [TRT] Tactic: 38 Time: 2.94827 [10/04/2021-21:26:31] [V] [TRT] Tactic: 39 Time: 0.03854 [10/04/2021-21:26:31] [V] [TRT] Tactic: 40 Time: 0.0556 [10/04/2021-21:26:31] [V] [TRT] Tactic: 41 Time: 0.091544 [10/04/2021-21:26:31] [V] [TRT] Tactic: 42 Time: 0.149952 [10/04/2021-21:26:31] [V] [TRT] Tactic: 43 Time: 0.255624 [10/04/2021-21:26:32] [V] [TRT] Tactic: 44 Time: 0.746176 [10/04/2021-21:26:32] [V] [TRT] Tactic: 45 Time: 1.45312 [10/04/2021-21:26:32] [V] [TRT] Tactic: 46 Time: 2.87039 [10/04/2021-21:26:32] [V] [TRT] Tactic: 47 Time: 0.054336 [10/04/2021-21:26:32] [V] [TRT] Tactic: 48 Time: 0.05958 [10/04/2021-21:26:32] [V] [TRT] Tactic: 49 Time: 0.102544 [10/04/2021-21:26:32] [V] [TRT] Tactic: 50 Time: 0.176836 [10/04/2021-21:26:32] [V] [TRT] Tactic: 51 Time: 0.307284 [10/04/2021-21:26:32] [V] [TRT] Tactic: 52 Time: 0.596068 [10/04/2021-21:26:32] [V] [TRT] Tactic: 53 Time: 1.17343 [10/04/2021-21:26:32] [V] [TRT] Tactic: 54 Time: 2.26351 [10/04/2021-21:26:32] [V] [TRT] Tactic: 55 Time: 0.037844 [10/04/2021-21:26:32] [V] [TRT] Tactic: 56 Time: 0.050616 [10/04/2021-21:26:32] [V] [TRT] Tactic: 57 Time: 
0.091156 [10/04/2021-21:26:32] [V] [TRT] Tactic: 58 Time: 0.157396 [10/04/2021-21:26:32] [V] [TRT] Tactic: 59 Time: 0.276516 [10/04/2021-21:26:32] [V] [TRT] Tactic: 60 Time: 0.540368 [10/04/2021-21:26:32] [V] [TRT] Tactic: 61 Time: 1.06759 [10/04/2021-21:26:32] [V] [TRT] Tactic: 62 Time: 2.12198 [10/04/2021-21:26:32] [V] [TRT] Fastest Tactic: 55 Time: 0.037844 [10/04/2021-21:26:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (CaskConvolution) [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:32] [V] [TRT] Tactic: 66319348402778770 Time: 0.019076 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_large_nt_v1 Tactic: 1931698692231796048 [10/04/2021-21:26:32] [V] [TRT] Tactic: 1931698692231796048 Time: 0.020092 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2004366221877065623 [10/04/2021-21:26:32] [V] [TRT] Tactic: 2004366221877065623 Time: 0.011636 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2169338034361422162 [10/04/2021-21:26:32] [V] [TRT] Tactic: 2169338034361422162 Time: 0.015212 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:32] [V] [TRT] Tactic: 2271687430539765460 Time: 0.025436 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:32] [V] [TRT] Tactic: 2284815435928292401 Time: 0.026716 [10/04/2021-21:26:32] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 [10/04/2021-21:26:32] [V] [TRT] Tactic: 3342635629009683930 Time: 0.009512 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3740557186499054067 [10/04/2021-21:26:32] [V] [TRT] Tactic: 3740557186499054067 Time: 0.015644 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:32] [V] [TRT] Tactic: 3768633326807889446 Time: 0.008688 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:32] [V] [TRT] Tactic: 5105539492142133503 Time: 0.008696 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:32] [V] [TRT] Tactic: 6129427510024568065 Time: 0.009696 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:32] [V] [TRT] Tactic: 7038629879025143810 Time: 0.022764 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:32] [V] [TRT] Tactic: 7039764449991095921 Time: 0.01702 [10/04/2021-21:26:32] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:32] [V] [TRT] Tactic: 7585914864117166414 Time: 0.010404 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:32] [V] [TRT] Tactic: -9134855779557081787 Time: 0.0139 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -9114895246540757312 [10/04/2021-21:26:32] [V] [TRT] Tactic: -9114895246540757312 Time: 0.017496 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:32] [V] [TRT] Tactic: -8945506186161066102 Time: 0.01396 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_large_nt_v1 Tactic: -8787970778927801941 [10/04/2021-21:26:32] [V] [TRT] Tactic: -8787970778927801941 Time: 0.017556 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:32] [V] [TRT] Tactic: -8707098593641355108 Time: 0.017452 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:32] [V] [TRT] Tactic: -8343122771093605666 Time: 0.010484 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + 
QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_large_nt_v1 Tactic: -8225786209923559953 [10/04/2021-21:26:32] [V] [TRT] Tactic: -8225786209923559953 Time: 0.023116 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_nt_v1 Tactic: -7373087278866484214 [10/04/2021-21:26:32] [V] [TRT] Tactic: -7373087278866484214 Time: 0.017792 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:32] [V] [TRT] Tactic: -7274936339335021260 Time: 0.017552 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r3s3 Tactic: -6068501086087743547 [10/04/2021-21:26:32] [V] [TRT] Tactic: -6068501086087743547 Time: 0.022844 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -5693797309970869451 [10/04/2021-21:26:32] [V] [TRT] Tactic: -5693797309970869451 Time: 0.01224 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -3036929044958869524 [10/04/2021-21:26:32] [V] [TRT] Tactic: -3036929044958869524 Time: 0.017456 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:32] [V] [TRT] Tactic: -2102888629196925141 Time: 0.0157 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:32] [V] [TRT] Tactic: -1832766392358096151 Time: 0.019156 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -1467400415054408443 [10/04/2021-21:26:32] [V] [TRT] Tactic: -1467400415054408443 Time: 0.022732 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:32] [V] [TRT] Tactic: -674235064782459186 Time: 0.022696 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:32] [V] [TRT] Tactic: -629322288573675003 Time: 0.011804 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:32] [V] [TRT] Tactic: -182858804213663094 Time: 0.015696 [10/04/2021-21:26:32] [V] [TRT] Fastest Tactic: 3768633326807889446 Time: 0.008688 [10/04/2021-21:26:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 3768633326807889446 [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning 
Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(256,256:32,16,1) -> 
Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] 
*************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] 
*************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1), Int8(256,256:32,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format 
combination: Int8(2048,256:4,16,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:32] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu (CaskConvolution) [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 4438325421691896755 [10/04/2021-21:26:32] [V] [TRT] Tactic: 4438325421691896755 Time: 0.02104 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:32] [V] [TRT] Tactic: 4581732244273465060 Time: 0.01918 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 [10/04/2021-21:26:32] [V] [TRT] Tactic: 4934335053031119367 Time: 0.022736 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:32] [V] [TRT] Tactic: 6797040896965118050 Time: 0.028144 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:32] [V] [TRT] Tactic: 8006952294591770973 Time: 0.023756 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:32] [V] [TRT] Tactic: -7210942453088153035 Time: 0.028152 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: 
volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:32] [V] [TRT] Tactic: -6282183216199417697 Time: 0.020972 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_nn_v1 Tactic: -5026383765466876607 [10/04/2021-21:26:32] [V] [TRT] Tactic: -5026383765466876607 Time: 0.028512 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:32] [V] [TRT] Tactic: -5016725782072253841 Time: 0.021036 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_nn_v1 Tactic: -1370999262391786833 [10/04/2021-21:26:32] [V] [TRT] Tactic: -1370999262391786833 Time: 0.02134 [10/04/2021-21:26:32] [V] [TRT] Fastest Tactic: 4581732244273465060 Time: 0.01918 [10/04/2021-21:26:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 4581732244273465060 [10/04/2021-21:26:32] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu (CaskConvolution) [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:32] [V] [TRT] Tactic: 1213457772632185722 Time: 0.022636 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:32] [V] [TRT] Tactic: 1713441381477652893 Time: 0.02294 [10/04/2021-21:26:32] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_c32_nn_v1 Tactic: 7125598890155666458 [10/04/2021-21:26:33] [V] [TRT] Tactic: 7125598890155666458 Time: 0.021412 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + 
StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:33] [V] [TRT] Tactic: 8047041638267142825 Time: 0.02104 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:33] [V] [TRT] Tactic: -7846982807478255793 Time: 0.01922 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:33] [V] [TRT] Tactic: -6459719113600909000 Time: 0.020952 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:33] [V] [TRT] Tactic: -4573925292554651334 Time: 0.028004 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:33] [V] [TRT] Tactic: -3566249366964946311 Time: 0.021044 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:33] [V] [TRT] Tactic: -2002418013575043687 Time: 0.02808 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_c32_nn_v1 Tactic: -1659631603542281459 [10/04/2021-21:26:33] [V] [TRT] Tactic: -1659631603542281459 Time: 0.028396 [10/04/2021-21:26:33] [V] [TRT] Fastest Tactic: -7846982807478255793 Time: 0.01922 [10/04/2021-21:26:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7846982807478255793 [10/04/2021-21:26:33] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:33] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:33] [V] [TRT] --------------- 
Timing Runner: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu (CudaGroupConvolution) [10/04/2021-21:26:33] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu (CaskConvolution) [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:33] [V] [TRT] Tactic: 66319348402778770 Time: 0.015724 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_large_nt_v1 Tactic: 1931698692231796048 [10/04/2021-21:26:33] [V] [TRT] Tactic: 1931698692231796048 Time: 0.017632 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2004366221877065623 [10/04/2021-21:26:33] [V] [TRT] Tactic: 2004366221877065623 Time: 0.009656 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2169338034361422162 [10/04/2021-21:26:33] [V] [TRT] Tactic: 2169338034361422162 Time: 0.012212 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:33] [V] [TRT] Tactic: 2271687430539765460 Time: 0.022752 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:33] [V] [TRT] Tactic: 2284815435928292401 Time: 0.022824 [10/04/2021-21:26:33] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 [10/04/2021-21:26:33] [V] [TRT] Tactic: 3342635629009683930 Time: 0.009908 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3740557186499054067 [10/04/2021-21:26:33] [V] [TRT] Tactic: 3740557186499054067 Time: 0.015732 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:33] [V] [TRT] Tactic: 3768633326807889446 Time: 0.008724 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:33] [V] [TRT] Tactic: 5105539492142133503 Time: 0.00868 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:33] [V] [TRT] Tactic: 6129427510024568065 Time: 0.009932 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:33] [V] [TRT] Tactic: 7038629879025143810 Time: 0.022696 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:33] [V] [TRT] Tactic: 7039764449991095921 Time: 0.01596 [10/04/2021-21:26:33] [V] 
[TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:33] [V] [TRT] Tactic: 7585914864117166414 Time: 0.010488 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:33] [V] [TRT] Tactic: -9134855779557081787 Time: 0.013104 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -9114895246540757312 [10/04/2021-21:26:33] [V] [TRT] Tactic: -9114895246540757312 Time: 0.016008 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:33] [V] [TRT] Tactic: -8945506186161066102 Time: 0.012172 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_large_nt_v1 Tactic: -8787970778927801941 [10/04/2021-21:26:33] [V] [TRT] Tactic: -8787970778927801941 Time: 0.016256 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:33] [V] [TRT] Tactic: -8707098593641355108 Time: 0.017548 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:33] [V] [TRT] Tactic: -8343122771093605666 Time: 0.010436 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + 
QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_large_nt_v1 Tactic: -8225786209923559953 [10/04/2021-21:26:33] [V] [TRT] Tactic: -8225786209923559953 Time: 0.02316 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_nt_v1 Tactic: -7373087278866484214 [10/04/2021-21:26:33] [V] [TRT] Tactic: -7373087278866484214 Time: 0.017224 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:33] [V] [TRT] Tactic: -7274936339335021260 Time: 0.017524 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r3s3 Tactic: -6068501086087743547 [10/04/2021-21:26:33] [V] [TRT] Tactic: -6068501086087743547 Time: 0.021904 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -5693797309970869451 [10/04/2021-21:26:33] [V] [TRT] Tactic: -5693797309970869451 Time: 0.01216 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -3036929044958869524 [10/04/2021-21:26:33] [V] [TRT] Tactic: -3036929044958869524 Time: 0.015972 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:33] [V] [TRT] Tactic: -2102888629196925141 Time: 0.01574 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:33] [V] [TRT] Tactic: -1832766392358096151 Time: 0.019156 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -1467400415054408443 [10/04/2021-21:26:33] [V] [TRT] Tactic: -1467400415054408443 Time: 0.022748 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:33] [V] [TRT] Tactic: -674235064782459186 Time: 0.022716 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:33] [V] [TRT] Tactic: -629322288573675003 Time: 0.011344 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:33] [V] [TRT] Tactic: -182858804213663094 Time: 0.015736 [10/04/2021-21:26:33] [V] [TRT] Fastest Tactic: 5105539492142133503 Time: 0.00868 [10/04/2021-21:26:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 5105539492142133503 [10/04/2021-21:26:33] [V] [TRT] *************** Autotuning Reformat:Int8(2048,256:4,16,1) -> Int8(256,256:32,16,1) *************** [10/04/2021-21:26:33] [V] [TRT] *************** Autotuning Reformat:Int8(256,256:32,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:26:33] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (CudaDepthwiseConvolution) [10/04/2021-21:26:33] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (FusedConvActConvolution) [10/04/2021-21:26:33] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:33] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (CaskConvolution) [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_interior_nn_v1 Tactic: 3145259992339075399 [10/04/2021-21:26:33] [V] [TRT] Tactic: 3145259992339075399 Time: 0.010672 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_interior_nn_v1 Tactic: 4000990898022781625 [10/04/2021-21:26:33] [V] [TRT] Tactic: 4000990898022781625 Time: 0.012384 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 4438325421691896755 [10/04/2021-21:26:33] [V] [TRT] Tactic: 4438325421691896755 Time: 0.01038 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:33] [V] [TRT] Tactic: 4581732244273465060 Time: 0.01038 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 [10/04/2021-21:26:33] [V] [TRT] Tactic: 4934335053031119367 Time: 0.012208 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:33] [V] [TRT] Tactic: 6797040896965118050 Time: 0.012228 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:33] [V] [TRT] Tactic: 8006952294591770973 Time: 0.010732 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_interior_nn_v1 Tactic: 8097855305881829878 [10/04/2021-21:26:33] [V] [TRT] Tactic: 8097855305881829878 Time: 0.010684 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:33] [V] [TRT] Tactic: -7210942453088153035 Time: 0.01224 [10/04/2021-21:26:33] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:33] [V] [TRT] Tactic: -6282183216199417697 Time: 0.010412 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:33] [V] [TRT] Tactic: -5016725782072253841 Time: 0.011868 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_interior_nn_v1 Tactic: -1543391652455542154 [10/04/2021-21:26:33] [V] [TRT] Tactic: -1543391652455542154 Time: 0.01218 [10/04/2021-21:26:33] [V] [TRT] Fastest Tactic: 4438325421691896755 Time: 0.01038 [10/04/2021-21:26:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 4438325421691896755 [10/04/2021-21:26:33] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (CaskConvolution) [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_interior_c32_nn_v1 Tactic: 1025026069226666066 [10/04/2021-21:26:33] [V] [TRT] Tactic: 1025026069226666066 Time: 0.012228 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:33] [V] [TRT] Tactic: 1213457772632185722 Time: 0.012172 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:33] [V] [TRT] Tactic: 1713441381477652893 Time: 0.010424 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_interior_c32_nn_v1 Tactic: 2339361327868109050 [10/04/2021-21:26:33] [V] [TRT] Tactic: 2339361327868109050 Time: 0.010732 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:33] [V] [TRT] Tactic: 8047041638267142825 Time: 0.010508 [10/04/2021-21:26:33] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:33] [V] [TRT] Tactic: -7846982807478255793 Time: 0.010484 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_interior_c32_nn_v1 Tactic: -7686150779628967382 [10/04/2021-21:26:33] [V] [TRT] Tactic: -7686150779628967382 Time: 0.0111 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:33] [V] [TRT] Tactic: -6459719113600909000 Time: 0.011572 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:33] [V] [TRT] Tactic: -4573925292554651334 Time: 0.012276 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_interior_c32_nn_v1 Tactic: -4208188808979933945 [10/04/2021-21:26:33] [V] [TRT] Tactic: -4208188808979933945 Time: 0.010688 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:33] [V] [TRT] Tactic: -3566249366964946311 Time: 0.010516 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:33] [V] [TRT] Tactic: -2002418013575043687 Time: 0.012188 [10/04/2021-21:26:33] [V] [TRT] Fastest Tactic: 1713441381477652893 Time: 0.010424 [10/04/2021-21:26:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 1713441381477652893 [10/04/2021-21:26:33] [V] [TRT] *************** Autotuning format combination: Int8(256,256:32,16,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (CudaDepthwiseConvolution) [10/04/2021-21:26:33] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (FusedConvActConvolution) [10/04/2021-21:26:33] [V] [TRT] 
FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (CudaGroupConvolution) [10/04/2021-21:26:33] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (CaskConvolution) [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:33] [V] [TRT] Tactic: 66319348402778770 Time: 0.010536 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:33] [V] [TRT] Tactic: 2271687430539765460 Time: 0.013928 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:33] [V] [TRT] Tactic: 2284815435928292401 Time: 0.014008 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r1s1 Tactic: 2403706865230711816 [10/04/2021-21:26:33] [V] [TRT] Tactic: 2403706865230711816 Time: 0.014056 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 [10/04/2021-21:26:33] [V] [TRT] Tactic: 3342635629009683930 Time: 0.006856 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: 4208725185761175800 [10/04/2021-21:26:33] [V] [TRT] Tactic: 4208725185761175800 Time: 0.008608 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: 
volta_int8_i8816cudnn_int8_128x128_ldg16_relu_interior_nt_v1 Tactic: 5754467717466343388 [10/04/2021-21:26:33] [V] [TRT] Tactic: 5754467717466343388 Time: 0.010644 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:33] [V] [TRT] Tactic: 6129427510024568065 Time: 0.00692 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: 6866122311506689057 [10/04/2021-21:26:33] [V] [TRT] Tactic: 6866122311506689057 Time: 0.010548 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r1s1 Tactic: 7003462296689159880 [10/04/2021-21:26:33] [V] [TRT] Tactic: 7003462296689159880 Time: 0.013836 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:33] [V] [TRT] Tactic: 7038629879025143810 Time: 0.01392 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:33] [V] [TRT] Tactic: 7039764449991095921 Time: 0.01182 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_interior_nt_v1 Tactic: 7584772692956718645 [10/04/2021-21:26:33] [V] [TRT] Tactic: 7584772692956718645 Time: 0.01418 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:33] [V] [TRT] Tactic: -9134855779557081787 Time: 0.008648 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 
Tactic: -9114895246540757312 [10/04/2021-21:26:33] [V] [TRT] Tactic: -9114895246540757312 Time: 0.012004 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:33] [V] [TRT] Tactic: -8945506186161066102 Time: 0.008652 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:33] [V] [TRT] Tactic: -8707098593641355108 Time: 0.01126 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:33] [V] [TRT] Tactic: -8343122771093605666 Time: 0.008228 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: -7979528930672358310 [10/04/2021-21:26:33] [V] [TRT] Tactic: -7979528930672358310 Time: 0.006904 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: -7743660625342027105 [10/04/2021-21:26:33] [V] [TRT] Tactic: -7743660625342027105 Time: 0.008748 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:33] [V] [TRT] Tactic: -7274936339335021260 Time: 0.012188 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r1s1 Tactic: -5081515910504910274 [10/04/2021-21:26:33] [V] [TRT] Tactic: -5081515910504910274 Time: 0.010384 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r1s1 Tactic: -4693009430365516309 [10/04/2021-21:26:33] [V] [TRT] Tactic: -4693009430365516309 Time: 0.006936 [10/04/2021-21:26:33] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: -3726322024058434766 [10/04/2021-21:26:33] [V] [TRT] Tactic: -3726322024058434766 Time: 0.006764 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:34] [V] [TRT] Tactic: -2102888629196925141 Time: 0.011736 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r1s1 Tactic: -2029608708402886013 [10/04/2021-21:26:34] [V] [TRT] Tactic: -2029608708402886013 Time: 0.008596 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:34] [V] [TRT] Tactic: -1832766392358096151 Time: 0.012188 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_interior_nt_v1 Tactic: -1383447415429797909 [10/04/2021-21:26:34] [V] [TRT] Tactic: -1383447415429797909 Time: 0.011096 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_interior_nt_v1 Tactic: -743032628982127825 [10/04/2021-21:26:34] [V] [TRT] Tactic: -743032628982127825 Time: 0.011128 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:34] [V] [TRT] Tactic: -674235064782459186 Time: 0.013956 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:34] [V] [TRT] Tactic: -629322288573675003 Time: 0.008624 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:34] [V] [TRT] Tactic: -182858804213663094 Time: 0.010404 [10/04/2021-21:26:34] [V] [TRT] Fastest Tactic: -3726322024058434766 Time: 0.006764 [10/04/2021-21:26:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -3726322024058434766 [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:34] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:34] [V] [TRT] Tactic: 1002 Time: 0.008636 [10/04/2021-21:26:34] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:34] [V] [TRT] Tactic: 0 Time: 0.005088 [10/04/2021-21:26:34] [V] [TRT] Fastest Tactic: 0 Time: 0.005088 [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:34] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:34] [V] [TRT] Tactic: 1002 Time: 0.008684 [10/04/2021-21:26:34] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:34] [V] [TRT] Tactic: 0 Time: 0.00506 [10/04/2021-21:26:34] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:34] [V] [TRT] Tactic: 1 Time: 0.005068 [10/04/2021-21:26:34] [V] [TRT] Fastest Tactic: 0 Time: 0.00506 [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:34] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (FusedConvActConvolution) [10/04/2021-21:26:34] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping 
[10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (CaskConvolution) [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 4438325421691896755 [10/04/2021-21:26:34] [V] [TRT] Tactic: 4438325421691896755 Time: 0.036916 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:34] [V] [TRT] Tactic: 4581732244273465060 Time: 0.032044 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 [10/04/2021-21:26:34] [V] [TRT] Tactic: 4934335053031119367 Time: 0.03864 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:34] [V] [TRT] Tactic: 6797040896965118050 Time: 0.049524 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:34] [V] [TRT] Tactic: 8006952294591770973 Time: 0.041884 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:34] [V] [TRT] Tactic: -7210942453088153035 Time: 0.049168 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:34] [V] [TRT] Tactic: -6282183216199417697 Time: 
0.034848 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_nn_v1 Tactic: -5026383765466876607 [10/04/2021-21:26:34] [V] [TRT] Tactic: -5026383765466876607 Time: 0.051008 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:34] [V] [TRT] Tactic: -5016725782072253841 Time: 0.033096 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_nn_v1 Tactic: -1370999262391786833 [10/04/2021-21:26:34] [V] [TRT] Tactic: -1370999262391786833 Time: 0.035532 [10/04/2021-21:26:34] [V] [TRT] Fastest Tactic: 4581732244273465060 Time: 0.032044 [10/04/2021-21:26:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 4581732244273465060 [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (CaskConvolution) [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:34] [V] [TRT] Tactic: 1213457772632185722 Time: 0.038676 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:34] [V] [TRT] Tactic: 1713441381477652893 Time: 0.040444 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_c32_nn_v1 Tactic: 7125598890155666458 [10/04/2021-21:26:34] [V] [TRT] Tactic: 7125598890155666458 Time: 0.035464 [10/04/2021-21:26:34] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:34] [V] [TRT] Tactic: 8047041638267142825 Time: 0.033604 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:34] [V] [TRT] Tactic: -7846982807478255793 Time: 0.031572 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:34] [V] [TRT] Tactic: -6459719113600909000 Time: 0.032328 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:34] [V] [TRT] Tactic: -4573925292554651334 Time: 0.047636 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:34] [V] [TRT] Tactic: -3566249366964946311 Time: 0.036444 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:34] [V] [TRT] Tactic: -2002418013575043687 Time: 0.049256 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_c32_nn_v1 Tactic: -1659631603542281459 [10/04/2021-21:26:34] [V] [TRT] Tactic: -1659631603542281459 Time: 0.049692 [10/04/2021-21:26:34] [V] [TRT] Fastest Tactic: -7846982807478255793 Time: 0.031572 [10/04/2021-21:26:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7846982807478255793 [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1), Int8(128,64:32,8,1) -> 
Int8(128,64:32,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:34] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (FusedConvActConvolution) [10/04/2021-21:26:34] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (CudaGroupConvolution) [10/04/2021-21:26:34] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (CaskConvolution) [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:34] [V] [TRT] Tactic: 66319348402778770 Time: 0.022812 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_large_nt_v1 Tactic: 1931698692231796048 [10/04/2021-21:26:34] [V] [TRT] Tactic: 1931698692231796048 Time: 0.024908 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2004366221877065623 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2004366221877065623 Time: 0.012184 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2169338034361422162 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2169338034361422162 Time: 0.015716 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2271687430539765460 Time: 0.033268 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2284815435928292401 Time: 0.031624 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 [10/04/2021-21:26:34] [V] [TRT] Tactic: 3342635629009683930 Time: 0.012128 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3740557186499054067 [10/04/2021-21:26:34] [V] [TRT] Tactic: 3740557186499054067 Time: 0.022692 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:34] [V] [TRT] Tactic: 3768633326807889446 Time: 0.010408 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:34] [V] [TRT] Tactic: 5105539492142133503 
Time: 0.010356 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:34] [V] [TRT] Tactic: 6129427510024568065 Time: 0.012048 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:34] [V] [TRT] Tactic: 7038629879025143810 Time: 0.03162 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:34] [V] [TRT] Tactic: 7039764449991095921 Time: 0.022864 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:34] [V] [TRT] Tactic: 7585914864117166414 Time: 0.01226 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:34] [V] [TRT] Tactic: -9134855779557081787 Time: 0.01752 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -9114895246540757312 [10/04/2021-21:26:34] [V] [TRT] Tactic: -9114895246540757312 Time: 0.022776 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:34] [V] [TRT] Tactic: -8945506186161066102 Time: 0.01678 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_large_nt_v1 Tactic: -8787970778927801941 [10/04/2021-21:26:34] [V] [TRT] Tactic: -8787970778927801941 Time: 0.023484 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:34] [V] [TRT] Tactic: -8707098593641355108 Time: 0.024504 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:34] [V] [TRT] Tactic: -8343122771093605666 Time: 0.013408 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_large_nt_v1 Tactic: -8225786209923559953 [10/04/2021-21:26:34] [V] [TRT] Tactic: -8225786209923559953 Time: 0.033704 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_nt_v1 Tactic: -7373087278866484214 [10/04/2021-21:26:34] [V] [TRT] Tactic: -7373087278866484214 Time: 0.023176 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:34] [V] [TRT] Tactic: -7274936339335021260 Time: 0.024536 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + 
StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r3s3 Tactic: -6068501086087743547 [10/04/2021-21:26:34] [V] [TRT] Tactic: -6068501086087743547 Time: 0.030904 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -5693797309970869451 [10/04/2021-21:26:34] [V] [TRT] Tactic: -5693797309970869451 Time: 0.015768 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -3036929044958869524 [10/04/2021-21:26:34] [V] [TRT] Tactic: -3036929044958869524 Time: 0.022712 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:34] [V] [TRT] Tactic: -2102888629196925141 Time: 0.022544 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:34] [V] [TRT] Tactic: -1832766392358096151 Time: 0.026348 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -1467400415054408443 [10/04/2021-21:26:34] [V] [TRT] Tactic: -1467400415054408443 Time: 0.031244 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:34] [V] [TRT] Tactic: -674235064782459186 Time: 
0.033308 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:34] [V] [TRT] Tactic: -629322288573675003 Time: 0.014024 [10/04/2021-21:26:34] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:34] [V] [TRT] Tactic: -182858804213663094 Time: 0.021056 [10/04/2021-21:26:34] [V] [TRT] Fastest Tactic: 5105539492142133503 Time: 0.010356 [10/04/2021-21:26:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 5105539492142133503 [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:34] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:34] [V] [TRT] Tactic: 1002 Time: 0.00862 [10/04/2021-21:26:34] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:34] [V] [TRT] Tactic: 0 Time: 0.005084 [10/04/2021-21:26:34] [V] [TRT] Fastest Tactic: 0 Time: 0.005084 [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:34] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:34] [V] [TRT] Tactic: 1002 Time: 0.008664 [10/04/2021-21:26:34] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:34] [V] [TRT] Tactic: 0 Time: 0.005156 [10/04/2021-21:26:34] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:34] [V] [TRT] Tactic: 1 Time: 0.005092 [10/04/2021-21:26:34] [V] [TRT] Fastest Tactic: 1 Time: 0.005092 [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:34] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:34] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (FusedConvActConvolution) [10/04/2021-21:26:34] [V] [TRT] Tactic: 524287 Time: 0.02274 [10/04/2021-21:26:34] [V] [TRT] Tactic: 720895 Time: 0.020652 [10/04/2021-21:26:34] [V] [TRT] Tactic: 983039 Time: 0.01394 [10/04/2021-21:26:34] [V] [TRT] Tactic: 1048575 Time: 0.017728 [10/04/2021-21:26:34] [V] [TRT] Tactic: 1703935 Time: 0.012152 [10/04/2021-21:26:34] [V] [TRT] Tactic: 1769471 Time: 0.013796 [10/04/2021-21:26:34] [V] [TRT] Tactic: 1966079 Time: 0.034952 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2031615 Time: 0.029828 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2228223 Time: 0.024508 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2424831 Time: 0.01308 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2621439 Time: 0.012104 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2752511 Time: 0.022852 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2818047 Time: 0.027708 [10/04/2021-21:26:34] [V] [TRT] Tactic: 2883583 Time: 0.04048 [10/04/2021-21:26:34] [V] [TRT] Tactic: 3014655 Time: 0.015116 [10/04/2021-21:26:34] [V] [TRT] Tactic: 3145727 Time: 0.01534 [10/04/2021-21:26:34] [V] [TRT] Tactic: 3473407 Time: 0.024284 [10/04/2021-21:26:35] [V] [TRT] Tactic: 3604479 Time: 0.015092 [10/04/2021-21:26:35] [V] [TRT] Tactic: 3735551 Time: 0.015632 [10/04/2021-21:26:35] [V] [TRT] Tactic: 4390911 Time: 0.036944 [10/04/2021-21:26:35] [V] [TRT] Tactic: 5046271 Time: 0.015652 [10/04/2021-21:26:35] [V] [TRT] Tactic: 5963775 Time: 0.031576 [10/04/2021-21:26:35] [V] [TRT] Tactic: 6160383 Time: 0.0263 [10/04/2021-21:26:35] [V] [TRT] Tactic: 6488063 Time: 0.021044 [10/04/2021-21:26:35] [V] [TRT] Tactic: 6881279 Time: 0.026284 [10/04/2021-21:26:35] [V] [TRT] Tactic: 7274495 Time: 0.010424 [10/04/2021-21:26:35] [V] [TRT] Tactic: 7864319 Time: 0.012248 [10/04/2021-21:26:35] [V] [TRT] Tactic: 7995391 Time: 0.02096 [10/04/2021-21:26:35] [V] [TRT] Tactic: 8585215 Time: 0.026312 [10/04/2021-21:26:35] [V] [TRT] Tactic: 8847359 Time: 0.012132 [10/04/2021-21:26:35] [V] [TRT] Tactic: 8978431 Time: 0.031512 [10/04/2021-21:26:35] [V] [TRT] Tactic: 9043967 Time: 0.01392 [10/04/2021-21:26:35] [V] [TRT] Tactic: 9175039 Time: 0.015292 [10/04/2021-21:26:35] [V] [TRT] Tactic: 9502719 Time: 0.036912 [10/04/2021-21:26:35] [V] [TRT] Tactic: 9830399 Time: 0.022568 [10/04/2021-21:26:35] [V] [TRT] Tactic: 9961471 Time: 0.013948 [10/04/2021-21:26:35] [V] [TRT] Tactic: 10027007 Time: 0.017168 [10/04/2021-21:26:35] [V] [TRT] Tactic: 10092543 Time: 0.036912 [10/04/2021-21:26:35] [V] [TRT] Tactic: 10289151 Time: 0.03494 [10/04/2021-21:26:35] [V] [TRT] Tactic: 10485759 Time: 0.012184 [10/04/2021-21:26:35] [V] [TRT] Tactic: 10682367 Time: 0.011704 [10/04/2021-21:26:35] [V] [TRT] Tactic: 10813439 Time: 0.013948 [10/04/2021-21:26:35] [V] [TRT] Fastest Tactic: 7274495 Time: 0.010424 [10/04/2021-21:26:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (CaskConvolution) [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 
Tactic: 4438325421691896755 [10/04/2021-21:26:35] [V] [TRT] Tactic: 4438325421691896755 Time: 0.033384 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 4581732244273465060 [10/04/2021-21:26:35] [V] [TRT] Tactic: 4581732244273465060 Time: 0.029828 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 4934335053031119367 [10/04/2021-21:26:35] [V] [TRT] Tactic: 4934335053031119367 Time: 0.036708 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6797040896965118050 [10/04/2021-21:26:35] [V] [TRT] Tactic: 6797040896965118050 Time: 0.047608 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 8006952294591770973 [10/04/2021-21:26:35] [V] [TRT] Tactic: 8006952294591770973 Time: 0.03866 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -7210942453088153035 [10/04/2021-21:26:35] [V] [TRT] Tactic: -7210942453088153035 Time: 0.045744 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: -6282183216199417697 [10/04/2021-21:26:35] [V] [TRT] Tactic: -6282183216199417697 Time: 0.031624 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_nn_v1 Tactic: -5026383765466876607 [10/04/2021-21:26:35] [V] [TRT] Tactic: -5026383765466876607 Time: 0.047956 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: -5016725782072253841 [10/04/2021-21:26:35] [V] [TRT] Tactic: -5016725782072253841 Time: 0.030904 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + 
QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_nn_v1 Tactic: -1370999262391786833 [10/04/2021-21:26:35] [V] [TRT] Tactic: -1370999262391786833 Time: 0.033692 [10/04/2021-21:26:35] [V] [TRT] Fastest Tactic: 4581732244273465060 Time: 0.029828 [10/04/2021-21:26:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: FusedConvActConvolution Tactic: 7274495 [10/04/2021-21:26:35] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (CaskConvolution) [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_medium_c32_nn_v1 Tactic: 1213457772632185722 [10/04/2021-21:26:35] [V] [TRT] Tactic: 1213457772632185722 Time: 0.035812 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_medium_c32_nn_v1 Tactic: 1713441381477652893 [10/04/2021-21:26:35] [V] [TRT] Tactic: 1713441381477652893 Time: 0.038692 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_xregs_large_c32_nn_v1 Tactic: 7125598890155666458 [10/04/2021-21:26:35] [V] [TRT] Tactic: 7125598890155666458 Time: 0.033736 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_small_c32_nn_v1 Tactic: 8047041638267142825 [10/04/2021-21:26:35] [V] [TRT] Tactic: 8047041638267142825 Time: 0.031664 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_small_c32_nn_v1 Tactic: -7846982807478255793 [10/04/2021-21:26:35] [V] [TRT] Tactic: -7846982807478255793 Time: 0.029872 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x32_relu_xregs_small_c32_nn_v1 Tactic: -6459719113600909000 [10/04/2021-21:26:35] [V] [TRT] Tactic: -6459719113600909000 Time: 0.030232 [10/04/2021-21:26:35] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_small_c32_nn_v1 Tactic: -4573925292554651334 [10/04/2021-21:26:35] [V] [TRT] Tactic: -4573925292554651334 Time: 0.045708 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x64_relu_medium_c32_nn_v1 Tactic: -3566249366964946311 [10/04/2021-21:26:35] [V] [TRT] Tactic: -3566249366964946311 Time: 0.033412 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_medium_c32_nn_v1 Tactic: -2002418013575043687 [10/04/2021-21:26:35] [V] [TRT] Tactic: -2002418013575043687 Time: 0.047608 [10/04/2021-21:26:35] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8x4_icudnn_int8x4_128x128_relu_xregs_large_c32_nn_v1 Tactic: -1659631603542281459 [10/04/2021-21:26:35] [V] [TRT] Tactic: -1659631603542281459 Time: 0.047916 [10/04/2021-21:26:35] [V] [TRT] Fastest Tactic: -7846982807478255793 Time: 0.029872 [10/04/2021-21:26:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7846982807478255793 [10/04/2021-21:26:35] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:35] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (FusedConvActConvolution) [10/04/2021-21:26:35] [V] [TRT] Tactic: 524287 Time: 0.022648 [10/04/2021-21:26:35] [V] [TRT] Tactic: 720895 Time: 0.020804 [10/04/2021-21:26:35] [V] [TRT] Tactic: 983039 Time: 0.013968 [10/04/2021-21:26:35] [V] [TRT] Tactic: 1048575 Time: 0.017512 [10/04/2021-21:26:35] [V] [TRT] Tactic: 1703935 Time: 0.012224 [10/04/2021-21:26:35] [V] [TRT] Tactic: 1769471 Time: 0.013972 [10/04/2021-21:26:35] [V] [TRT] Tactic: 1966079 Time: 0.034584 [10/04/2021-21:26:35] [V] [TRT] Tactic: 2031615 Time: 0.029504 [10/04/2021-21:26:35] [V] [TRT] Tactic: 2228223 Time: 0.026228 [10/04/2021-21:26:35] [V] [TRT] Tactic: 2424831 Time: 0.01392 [10/04/2021-21:26:35] [V] [TRT] Tactic: 2621439 Time: 0.011548 [10/04/2021-21:26:35] [V] [TRT] Tactic: 2752511 Time: 0.02276 [10/04/2021-21:26:35] [V] [TRT] Tactic: 2818047 Time: 0.027844 [10/04/2021-21:26:35] [V] [TRT] 
Tactic: 2883583 Time: 0.041008 [10/04/2021-21:26:35] [V] [TRT] Tactic: 3014655 Time: 0.01428 [10/04/2021-21:26:35] [V] [TRT] Tactic: 3145727 Time: 0.015372 [10/04/2021-21:26:35] [V] [TRT] Tactic: 3473407 Time: 0.024592 [10/04/2021-21:26:35] [V] [TRT] Tactic: 3604479 Time: 0.015272 [10/04/2021-21:26:35] [V] [TRT] Tactic: 3735551 Time: 0.015536 [10/04/2021-21:26:35] [V] [TRT] Tactic: 4390911 Time: 0.036856 [10/04/2021-21:26:35] [V] [TRT] Tactic: 5046271 Time: 0.01536 [10/04/2021-21:26:35] [V] [TRT] Tactic: 5963775 Time: 0.031584 [10/04/2021-21:26:35] [V] [TRT] Tactic: 6160383 Time: 0.027992 [10/04/2021-21:26:35] [V] [TRT] Tactic: 6488063 Time: 0.020912 [10/04/2021-21:26:35] [V] [TRT] Tactic: 6881279 Time: 0.026252 [10/04/2021-21:26:35] [V] [TRT] Tactic: 7274495 Time: 0.010428 [10/04/2021-21:26:35] [V] [TRT] Tactic: 7864319 Time: 0.012148 [10/04/2021-21:26:35] [V] [TRT] Tactic: 7995391 Time: 0.021 [10/04/2021-21:26:35] [V] [TRT] Tactic: 8585215 Time: 0.026148 [10/04/2021-21:26:35] [V] [TRT] Tactic: 8847359 Time: 0.012168 [10/04/2021-21:26:35] [V] [TRT] Tactic: 8978431 Time: 0.031944 [10/04/2021-21:26:35] [V] [TRT] Tactic: 9043967 Time: 0.013896 [10/04/2021-21:26:36] [V] [TRT] Tactic: 9175039 Time: 0.015096 [10/04/2021-21:26:36] [V] [TRT] Tactic: 9502719 Time: 0.036764 [10/04/2021-21:26:36] [V] [TRT] Tactic: 9830399 Time: 0.022672 [10/04/2021-21:26:36] [V] [TRT] Tactic: 9961471 Time: 0.013948 [10/04/2021-21:26:36] [V] [TRT] Tactic: 10027007 Time: 0.017244 [10/04/2021-21:26:36] [V] [TRT] Tactic: 10092543 Time: 0.036876 [10/04/2021-21:26:36] [V] [TRT] Tactic: 10289151 Time: 0.034524 [10/04/2021-21:26:36] [V] [TRT] Tactic: 10485759 Time: 0.012128 [10/04/2021-21:26:36] [V] [TRT] Tactic: 10682367 Time: 0.010492 [10/04/2021-21:26:36] [V] [TRT] Tactic: 10813439 Time: 0.013824 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 7274495 Time: 0.010428 [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (CudaGroupConvolution) [10/04/2021-21:26:36] [V] [TRT] CudaGroupConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (CaskConvolution) [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 66319348402778770 [10/04/2021-21:26:36] [V] [TRT] Tactic: 66319348402778770 Time: 0.02114 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_large_nt_v1 Tactic: 1931698692231796048 [10/04/2021-21:26:36] [V] [TRT] Tactic: 1931698692231796048 Time: 0.023216 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2004366221877065623 [10/04/2021-21:26:36] [V] [TRT] Tactic: 2004366221877065623 Time: 0.011608 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 2169338034361422162 [10/04/2021-21:26:36] [V] [TRT] Tactic: 2169338034361422162 Time: 0.0156 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 2271687430539765460 [10/04/2021-21:26:36] [V] [TRT] Tactic: 2271687430539765460 Time: 0.030744 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16 Tactic: 2284815435928292401 [10/04/2021-21:26:36] [V] [TRT] Tactic: 2284815435928292401 Time: 0.031484 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: 3342635629009683930 [10/04/2021-21:26:36] [V] [TRT] Tactic: 3342635629009683930 Time: 0.012236 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3740557186499054067 [10/04/2021-21:26:36] [V] [TRT] Tactic: 3740557186499054067 Time: 0.020992 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:36] [V] [TRT] Tactic: 3768633326807889446 Time: 0.010424 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + 
QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:36] [V] [TRT] Tactic: 5105539492142133503 Time: 0.010424 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: 6129427510024568065 [10/04/2021-21:26:36] [V] [TRT] Tactic: 6129427510024568065 Time: 0.011632 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16 Tactic: 7038629879025143810 [10/04/2021-21:26:36] [V] [TRT] Tactic: 7038629879025143810 Time: 0.030868 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: 7039764449991095921 [10/04/2021-21:26:36] [V] [TRT] Tactic: 7039764449991095921 Time: 0.021136 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:36] [V] [TRT] Tactic: 7585914864117166414 Time: 0.012216 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -9134855779557081787 [10/04/2021-21:26:36] [V] [TRT] Tactic: -9134855779557081787 Time: 0.01576 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -9114895246540757312 [10/04/2021-21:26:36] [V] [TRT] Tactic: -9114895246540757312 Time: 0.021028 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8945506186161066102 [10/04/2021-21:26:36] [V] [TRT] Tactic: -8945506186161066102 Time: 0.015784 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_large_nt_v1 Tactic: -8787970778927801941 [10/04/2021-21:26:36] [V] [TRT] Tactic: -8787970778927801941 Time: 0.021936 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8707098593641355108 [10/04/2021-21:26:36] [V] [TRT] Tactic: -8707098593641355108 Time: 0.0227 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16 Tactic: -8343122771093605666 [10/04/2021-21:26:36] [V] [TRT] Tactic: -8343122771093605666 Time: 0.012192 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_large_nt_v1 Tactic: -8225786209923559953 [10/04/2021-21:26:36] [V] [TRT] Tactic: -8225786209923559953 Time: 0.031148 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_nt_v1 Tactic: -7373087278866484214 [10/04/2021-21:26:36] [V] [TRT] Tactic: -7373087278866484214 Time: 0.021296 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -7274936339335021260 [10/04/2021-21:26:36] [V] [TRT] Tactic: -7274936339335021260 Time: 0.022868 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x256x64_stage1_warpsize2x4x1_g1_tensor8x8x16_t1r3s3 Tactic: -6068501086087743547 [10/04/2021-21:26:36] [V] [TRT] Tactic: -6068501086087743547 Time: 0.029808 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -5693797309970869451 [10/04/2021-21:26:36] [V] [TRT] Tactic: -5693797309970869451 Time: 0.015696 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize128x128x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -3036929044958869524 [10/04/2021-21:26:36] [V] [TRT] Tactic: -3036929044958869524 Time: 0.021028 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -2102888629196925141 [10/04/2021-21:26:36] [V] [TRT] Tactic: -2102888629196925141 Time: 0.020588 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x64x64_stage1_warpsize4x1x1_g1_tensor8x8x16 Tactic: -1832766392358096151 [10/04/2021-21:26:36] [V] [TRT] Tactic: -1832766392358096151 Time: 0.024668 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize256x128x64_stage1_warpsize4x2x1_g1_tensor8x8x16_t1r3s3 Tactic: -1467400415054408443 [10/04/2021-21:26:36] [V] [TRT] Tactic: -1467400415054408443 Time: 0.029832 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -674235064782459186 [10/04/2021-21:26:36] [V] [TRT] Tactic: -674235064782459186 Time: 0.029836 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16 Tactic: -629322288573675003 [10/04/2021-21:26:36] [V] [TRT] Tactic: -629322288573675003 Time: 0.013868 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: volta_int8_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -182858804213663094 [10/04/2021-21:26:36] [V] [TRT] Tactic: -182858804213663094 Time: 0.01988 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 5105539492142133503 Time: 0.010424 [10/04/2021-21:26:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 5105539492142133503 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), 
Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** 
[10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) 
*************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1), Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(4096,64,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 1002 Time: 0.0122 [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.006812 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.006812 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(4096,64,8,1) *************** [10/04/2021-21:26:36] [V] 
[TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 1002 Time: 0.010436 [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.00686 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.00686 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(4096,64,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 1002 Time: 0.010752 [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.005072 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.005072 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(4096,64,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 1002 Time: 0.008656 [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.006692 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.006692 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(4096,64,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 1002 Time: 0.011952 [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.00678 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.00678 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(4096,64,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 1002 Time: 0.01044 [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.006836 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.006836 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(4096,64,8,1) -> Float(4096,64,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: 
DequantLinearNode__1173_quantize_scale_node (Scale) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for DequantLinearNode__1173_quantize_scale_node [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.008448 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.008448 [10/04/2021-21:26:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Float(4096,64,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: DequantLinearNode__1173_quantize_scale_node (Scale) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for DequantLinearNode__1173_quantize_scale_node [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.008604 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.008604 [10/04/2021-21:26:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1) -> Float(4096,64,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: DequantLinearNode__1173_quantize_scale_node (Scale) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for DequantLinearNode__1173_quantize_scale_node [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.008652 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.008652 [10/04/2021-21:26:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(4096,64,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(4096,64,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Int8(128,64:32,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 1002 Time: 0.011868 [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.007284 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.007284 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning Reformat:Float(128,64:32,8,1) -> Float(4096,64,8,1) 
*************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 1002 Time: 0.012188 [10/04/2021-21:26:36] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:36] [V] [TRT] Tactic: 0 Time: 0.006652 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 0 Time: 0.006652 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Float(4096,64,8,1) -> Float(4096,64,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu (CudaDepthwiseConvolution) [10/04/2021-21:26:36] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu (CaskConvolution) [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 892787096507693407 [10/04/2021-21:26:36] [V] [TRT] Tactic: 892787096507693407 Time: 0.052848 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_icudnn_int8x4_128x32_relu_xregs_medium_nn_v1 Tactic: 1204440019753223942 [10/04/2021-21:26:36] [V] [TRT] Tactic: 1204440019753223942 Time: 0.040544 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 1659301557717208403 [10/04/2021-21:26:36] [V] [TRT] Tactic: 1659301557717208403 Time: 0.036944 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 2057291331119027912 [10/04/2021-21:26:36] [V] [TRT] Tactic: 2057291331119027912 Time: 0.038744 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + 
StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_icudnn_int8x4_128x32_relu_xregs_small_nn_v1 Tactic: 3275977259705528576 [10/04/2021-21:26:36] [V] [TRT] Tactic: 3275977259705528576 Time: 0.035108 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 5623454780463195174 [10/04/2021-21:26:36] [V] [TRT] Tactic: 5623454780463195174 Time: 0.044084 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_icudnn_int8x4_128x64_relu_xregs_large_nn_v1 Tactic: 8930254200803946944 [10/04/2021-21:26:36] [V] [TRT] Tactic: 8930254200803946944 Time: 0.038344 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -9204333525109552344 [10/04/2021-21:26:36] [V] [TRT] Tactic: -9204333525109552344 Time: 0.03518 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -4973811344878172338 [10/04/2021-21:26:36] [V] [TRT] Tactic: -4973811344878172338 Time: 0.050992 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_icudnn_int8x4_128x128_relu_xregs_large_nn_v1 Tactic: -1228371230285617088 [10/04/2021-21:26:36] [V] [TRT] Tactic: -1228371230285617088 Time: 0.053128 [10/04/2021-21:26:36] [V] [TRT] Fastest Tactic: 3275977259705528576 Time: 0.035108 [10/04/2021-21:26:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 3275977259705528576 [10/04/2021-21:26:36] [V] [TRT] *************** Autotuning format combination: Int8(128,64:32,8,1), Float(128,64:32,8,1) -> Float(128,64:32,8,1) *************** [10/04/2021-21:26:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu (CaskConvolution) [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: 
volta_fp32_i8816cudnn_int8_256x128_ldg16_relu_medium_nt_v1 Tactic: 7785217228143857868 [10/04/2021-21:26:36] [V] [TRT] Tactic: 7785217228143857868 Time: 0.030156 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_128x128_ldg16_relu_medium_nt_v1 Tactic: 8315790488934712458 [10/04/2021-21:26:36] [V] [TRT] Tactic: 8315790488934712458 Time: 0.022444 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_large_nt_v1 Tactic: -8165074686865110847 [10/04/2021-21:26:36] [V] [TRT] Tactic: -8165074686865110847 Time: 0.023164 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_medium_nt_v1 Tactic: -7185527339793611699 [10/04/2021-21:26:36] [V] [TRT] Tactic: -7185527339793611699 Time: 0.02196 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_medium_nt_v1 Tactic: -5979101256828290173 [10/04/2021-21:26:36] [V] [TRT] Tactic: -5979101256828290173 Time: 0.022788 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_128x128_ldg16_relu_large_nt_v1 Tactic: -5763174003249488100 [10/04/2021-21:26:36] [V] [TRT] Tactic: -5763174003249488100 Time: 0.023116 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_large_nt_v1 Tactic: -5612459945002849429 [10/04/2021-21:26:36] [V] [TRT] Tactic: -5612459945002849429 Time: 0.02316 [10/04/2021-21:26:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_256x128_ldg16_relu_large_nt_v1 Tactic: -4911193113143178408 [10/04/2021-21:26:37] [V] [TRT] Tactic: -4911193113143178408 Time: 0.03052 [10/04/2021-21:26:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + 
QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_256x128_ldg16_relu_small_nt_v1 Tactic: -4563432698383308679 [10/04/2021-21:26:37] [V] [TRT] Tactic: -4563432698383308679 Time: 0.029828 [10/04/2021-21:26:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_small_nt_v1 Tactic: -3936136542475827126 [10/04/2021-21:26:37] [V] [TRT] Tactic: -3936136542475827126 Time: 0.021212 [10/04/2021-21:26:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_128x128_ldg16_relu_small_nt_v1 Tactic: -3784829056659735491 [10/04/2021-21:26:37] [V] [TRT] Tactic: -3784829056659735491 Time: 0.021076 [10/04/2021-21:26:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -733152064595858464 [10/04/2021-21:26:37] [V] [TRT] Tactic: -733152064595858464 Time: 0.021064 [10/04/2021-21:26:37] [V] [TRT] Fastest Tactic: -733152064595858464 Time: 0.021064 [10/04/2021-21:26:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -733152064595858464 [10/04/2021-21:26:37] [V] [TRT] *************** Autotuning Reformat:Float(128,64:32,8,1) -> Float(4096,64,8,1) *************** [10/04/2021-21:26:37] [V] [TRT] *************** Autotuning format combination: Float(4096,64,8,1) -> Float(64,1,1,1) *************** [10/04/2021-21:26:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/global_average_pooling2d/Mean (TiledPooling) [10/04/2021-21:26:37] [V] [TRT] TiledPooling has no valid tactics for this config, skipping [10/04/2021-21:26:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/global_average_pooling2d/Mean (CudnnPooling) [10/04/2021-21:26:37] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/global_average_pooling2d/Mean [10/04/2021-21:26:37] [V] [TRT] Tactic: -1 Time: 0.00868 [10/04/2021-21:26:37] [V] [TRT] Fastest Tactic: -1 Time: 0.00868 [10/04/2021-21:26:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CudnnPooling Tactic: -1 [10/04/2021-21:26:37] [V] [TRT] *************** Autotuning Reformat:Float(64,1,1,1) -> Float(64,1,64,64) *************** [10/04/2021-21:26:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:37] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:37] [V] [TRT] Tactic: 1002 Time: 0.008592 [10/04/2021-21:26:37] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:37] [V] [TRT] Tactic: 0 Time: 0.005096 
[10/04/2021-21:26:37] [V] [TRT] Fastest Tactic: 0 Time: 0.005096 [10/04/2021-21:26:37] [V] [TRT] *************** Autotuning Reformat:Float(64,1,1,1) -> Float(64,1,64,64) *************** [10/04/2021-21:26:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:37] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:37] [V] [TRT] Tactic: 1002 Time: 0.008616 [10/04/2021-21:26:37] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:37] [V] [TRT] Tactic: 0 Time: 0.005132 [10/04/2021-21:26:37] [V] [TRT] Fastest Tactic: 0 Time: 0.005132 [10/04/2021-21:26:37] [V] [TRT] *************** Autotuning Reformat:Float(64,1,64,64) -> Float(64,1,1,1) *************** [10/04/2021-21:26:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:37] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:37] [V] [TRT] Tactic: 1002 Time: 0.008572 [10/04/2021-21:26:37] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:37] [V] [TRT] Tactic: 0 Time: 0.0051 [10/04/2021-21:26:37] [V] [TRT] Fastest Tactic: 0 Time: 0.0051 [10/04/2021-21:26:37] [V] [TRT] *************** Autotuning format combination: Float(64,1,1,1) -> Float(10,1,1,1) *************** [10/04/2021-21:26:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd (CudaDepthwiseConvolution) [10/04/2021-21:26:37] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd (FusedConvActConvolution) [10/04/2021-21:26:37] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd (CudnnConvolution) [10/04/2021-21:26:37] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 0 Time: 0.017012 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 1 Time: 0.016924 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 2 Time: 0.04276 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 4 Time: 0.063608 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 5 Time: 0.034828 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing 
for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 56 Time: 0.016716 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 57 Time: 0.016772 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 58 Time: 0.042284 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 60 Time: 0.063796 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 61 Time: 0.035252 [10/04/2021-21:26:38] [V] [TRT] Fastest Tactic: 56 Time: 0.016716 [10/04/2021-21:26:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd (CublasConvolution) [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 0 Time: 0.009656 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 1 Time: 0.009568 [10/04/2021-21:26:38] [V] [TRT] Fastest Tactic: 1 Time: 0.009568 [10/04/2021-21:26:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd (CaskConvolution) [10/04/2021-21:26:38] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 Tactic: 1754569683116234317 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 1754569683116234317 Time: 0.028056 [10/04/2021-21:26:38] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 Tactic: 1825138533642645384 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 1825138533642645384 Time: 0.029148 [10/04/2021-21:26:38] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 Tactic: 2733356012094739613 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd 
[10/04/2021-21:26:38] [V] [TRT] Tactic: 2733356012094739613 Time: 0.0211 [10/04/2021-21:26:38] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 Tactic: 3915320020053085238 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 3915320020053085238 Time: 0.028884 [10/04/2021-21:26:38] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 Tactic: 6808617066150061604 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 6808617066150061604 Time: 0.020756 [10/04/2021-21:26:38] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 Tactic: 9091006216302412844 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: 9091006216302412844 Time: 0.021064 [10/04/2021-21:26:38] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 Tactic: -8060443123034038864 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: -8060443123034038864 Time: 0.021452 [10/04/2021-21:26:38] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: sm70_xmma_fprop_conv1x1_f32f32_f32_f32_nchwkcrs_nchw_simt_small_batch_bias_relu Tactic: -6194327789991425125 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: -6194327789991425125 Time: 0.006076 [10/04/2021-21:26:38] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 Tactic: -4420849921117327522 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: -4420849921117327522 Time: 0.015672 [10/04/2021-21:26:38] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 Tactic: -3946921629105938337 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:38] [V] [TRT] Tactic: -3946921629105938337 Time: 0.020948 [10/04/2021-21:26:38] [V] [TRT] Fastest Tactic: -6194327789991425125 Time: 
0.006076 [10/04/2021-21:26:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6194327789991425125 [10/04/2021-21:26:38] [V] [TRT] *************** Autotuning format combination: Float(64,1,64,64) -> Float(10,1,10,10) *************** [10/04/2021-21:26:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd (CudnnConvolution) [10/04/2021-21:26:38] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd (CublasConvolution) [10/04/2021-21:26:38] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd (CaskConvolution) [10/04/2021-21:26:38] [V] [TRT] CaskConvolution has no valid tactics for this config, skipping [10/04/2021-21:26:38] [V] [TRT] *************** Autotuning Reformat:Float(10,1,10,10) -> Float(10,1,1,1) *************** [10/04/2021-21:26:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:38] [V] [TRT] Tactic: 1002 Time: 0.0084 [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:26:38] [V] [TRT] Tactic: 0 Time: 0.005096 [10/04/2021-21:26:38] [V] [TRT] Fastest Tactic: 0 Time: 0.005096 [10/04/2021-21:26:38] [V] [TRT] *************** Autotuning format combination: Float(10,1,1,1) -> Float(10,1) *************** [10/04/2021-21:26:38] [V] [TRT] --------------- Timing Runner: copied_squeeze_after_StatefulPartitionedCall/model/dense/BiasAdd (Shuffle) [10/04/2021-21:26:38] [V] [TRT] Setting a default quantization params because quantization data is missing for copied_squeeze_after_StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:39] [V] [TRT] Tactic: 0 Time: 0.004888 [10/04/2021-21:26:39] [V] [TRT] Setting a default quantization params because quantization data is missing for copied_squeeze_after_StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:39] [V] [TRT] Tactic: 1 Time: 0.011728 [10/04/2021-21:26:39] [V] [TRT] Fastest Tactic: 0 Time: 0.004888 [10/04/2021-21:26:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0 [10/04/2021-21:26:39] [V] [TRT] Adding reformat layer: Reformatted Input Tensor 1 to StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu (StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0) from Float(4096,64,8,1) to Float(128,64:32,8,1) [10/04/2021-21:26:39] [V] [TRT] Adding reformat layer: Reformatted Input Tensor 0 to StatefulPartitionedCall/model/global_average_pooling2d/Mean (StatefulPartitionedCall/model/activation_54/Relu:0) from Float(128,64:32,8,1) to Float(4096,64,8,1) [10/04/2021-21:26:39] [V] [TRT] Formats and tactics selection completed in 15.5767 seconds. 
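The autotuning phase above repeats one pattern per layer and format combination: every candidate tactic is timed, the fastest is reported, and a single runner type is chosen. A minimal sketch of how such a saved verbose log could be summarized offline is shown below; it only parses the "Fastest Tactic: <id> Time: <t>" lines that appear in this output. The script and the file name build.log are illustrative assumptions, not part of trtexec, and the time values are reported exactly as trtexec prints them.

    # Sketch: summarize "Fastest Tactic" entries from a saved trtexec verbose log.
    # Assumes the log was captured with something like:
    #   trtexec --onnx=resnet.onnx --verbose ... 2>&1 | tee build.log
    import re
    import sys

    # Matches lines such as "Fastest Tactic: -6194327789991425125 Time: 0.006076"
    FASTEST = re.compile(r"Fastest Tactic: (-?\d+) Time: ([0-9.]+)")

    def summarize(path):
        entries = []
        with open(path) as log:
            for line in log:
                for m in FASTEST.finditer(line):
                    entries.append((float(m.group(2)), m.group(1)))
        entries.sort()  # slowest winners last, fastest first
        print(f"{len(entries)} 'Fastest Tactic' entries found")
        for time_value, tactic in entries[:5]:
            print(f"  tactic {tactic}: {time_value}")

    if __name__ == "__main__":
        summarize(sys.argv[1] if len(sys.argv) > 1 else "build.log")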
[10/04/2021-21:26:39] [V] [TRT] After reformat layers: 64 layers [10/04/2021-21:26:39] [V] [TRT] Block size 16777216 [10/04/2021-21:26:39] [V] [TRT] Block size 32768 [10/04/2021-21:26:39] [V] [TRT] Block size 32768 [10/04/2021-21:26:39] [V] [TRT] Block size 32768 [10/04/2021-21:26:39] [V] [TRT] Total Activation Memory: 16875520 [10/04/2021-21:26:39] [I] [TRT] Detected 1 inputs and 1 output network tensors. [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 + QuantLinearNode__664_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd + StatefulPartitionedCall/model/activation_3/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd + StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 + QuantLinearNode__684_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd + StatefulPartitionedCall/model/activation_5/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + QuantLinearNode__692_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd + StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 + QuantLinearNode__704_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd + StatefulPartitionedCall/model/activation_7/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd + StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 + QuantLinearNode__724_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd + StatefulPartitionedCall/model/activation_9/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd + StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 + QuantLinearNode__744_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd + StatefulPartitionedCall/model/activation_11/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + QuantLinearNode__752_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd + StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 + QuantLinearNode__764_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd + StatefulPartitionedCall/model/activation_13/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + 
QuantLinearNode__772_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd + StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 + QuantLinearNode__784_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd + StatefulPartitionedCall/model/activation_15/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd + StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 + QuantLinearNode__804_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd + StatefulPartitionedCall/model/activation_17/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd + StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize64x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 7585914864117166414 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r1s1 Tactic: -4693009430365516309 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd + StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 + QuantLinearNode__872_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd + StatefulPartitionedCall/model/activation_23/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd + StatefulPartitionedCall/model/add_11/add + StatefulPartitionedCall/model/activation_24/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 + QuantLinearNode__892_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd + StatefulPartitionedCall/model/activation_25/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd + StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 + QuantLinearNode__912_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd + StatefulPartitionedCall/model/activation_27/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd + StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 + QuantLinearNode__932_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd + StatefulPartitionedCall/model/activation_29/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd + StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 + QuantLinearNode__952_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd + StatefulPartitionedCall/model/activation_31/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd + StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 + QuantLinearNode__972_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd + StatefulPartitionedCall/model/activation_33/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd + StatefulPartitionedCall/model/add_16/add + StatefulPartitionedCall/model/activation_34/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 + QuantLinearNode__992_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd + StatefulPartitionedCall/model/activation_35/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd + StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x32x64_stage1_warpsize2x1x1_g1_tensor8x8x16_t1r3s3 Tactic: 3768633326807889446 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r1s1 Tactic: -3726322024058434766 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd + StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 + QuantLinearNode__1060_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd + StatefulPartitionedCall/model/activation_41/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + 
QuantLinearNode__1068_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd + StatefulPartitionedCall/model/add_20/add + StatefulPartitionedCall/model/activation_42/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 + QuantLinearNode__1080_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd + StatefulPartitionedCall/model/activation_43/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd + StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 + QuantLinearNode__1100_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd + StatefulPartitionedCall/model/activation_45/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd + StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 + QuantLinearNode__1120_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd + StatefulPartitionedCall/model/activation_47/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd + StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 + QuantLinearNode__1140_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd + StatefulPartitionedCall/model/activation_49/Relu Set Tactic Name: 
sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd + StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 + QuantLinearNode__1160_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd + StatefulPartitionedCall/model/activation_51/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + QuantLinearNode__1168_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd + StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 + QuantLinearNode__1180_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd + StatefulPartitionedCall/model/activation_53/Relu Set Tactic Name: sm72_xmma_fprop_implicit_gemm_interleaved_i8i8_i8i32_f32_nchw_vect_c_32kcrs_vect_c_32_nchw_vect_c_32_tilesize32x64x64_stage1_warpsize2x2x1_g1_tensor8x8x16_t1r3s3 Tactic: 5105539492142133503 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: volta_fp32_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1 Tactic: -733152064595858464 [10/04/2021-21:26:39] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: sm70_xmma_fprop_conv1x1_f32f32_f32_f32_nchwkcrs_nchw_simt_small_batch_bias_relu Tactic: -6194327789991425125 [10/04/2021-21:26:39] [V] [TRT] Setting a default quantization params because quantization data is missing for QuantLinearNode__628_quantize_scale_node [10/04/2021-21:26:39] [V] [TRT] Setting a default quantization params because quantization data is missing for DequantLinearNode__1173_quantize_scale_node [10/04/2021-21:26:39] [V] [TRT] Setting a default quantization params because quantization data is missing for Reformatting CopyNode for Input Tensor 1 to StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu [10/04/2021-21:26:39] [V] [TRT] Setting a default quantization params because quantization data is missing 
for Reformatting CopyNode for Input Tensor 0 to StatefulPartitionedCall/model/global_average_pooling2d/Mean [10/04/2021-21:26:39] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/global_average_pooling2d/Mean [10/04/2021-21:26:39] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:26:39] [V] [TRT] Layer: QuantLinearNode__628_quantize_scale_node HostPersistent: 0 DevicePersistent: 0 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 + QuantLinearNode__664_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd + StatefulPartitionedCall/model/activation_3/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd + StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 + QuantLinearNode__684_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd + StatefulPartitionedCall/model/activation_5/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + QuantLinearNode__692_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd + StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 + QuantLinearNode__704_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd + StatefulPartitionedCall/model/activation_7/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd + StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 + QuantLinearNode__724_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd + StatefulPartitionedCall/model/activation_9/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd + StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 + QuantLinearNode__744_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd + StatefulPartitionedCall/model/activation_11/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + QuantLinearNode__752_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd + StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 + QuantLinearNode__764_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd + StatefulPartitionedCall/model/activation_13/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + QuantLinearNode__772_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd + StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 + QuantLinearNode__784_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd + StatefulPartitionedCall/model/activation_15/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd + StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 + QuantLinearNode__804_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd + StatefulPartitionedCall/model/activation_17/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd + StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd HostPersistent: 2976 DevicePersistent: 1536 
[10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd + StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 + QuantLinearNode__872_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd + StatefulPartitionedCall/model/activation_23/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd + StatefulPartitionedCall/model/add_11/add + StatefulPartitionedCall/model/activation_24/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 + QuantLinearNode__892_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd + StatefulPartitionedCall/model/activation_25/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd + StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 + QuantLinearNode__912_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd + StatefulPartitionedCall/model/activation_27/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd + StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 + QuantLinearNode__932_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd + StatefulPartitionedCall/model/activation_29/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd + StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: 
StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 + QuantLinearNode__952_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd + StatefulPartitionedCall/model/activation_31/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd + StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 + QuantLinearNode__972_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd + StatefulPartitionedCall/model/activation_33/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd + StatefulPartitionedCall/model/add_16/add + StatefulPartitionedCall/model/activation_34/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 + QuantLinearNode__992_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd + StatefulPartitionedCall/model/activation_35/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd + StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu HostPersistent: 2976 DevicePersistent: 9728 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu HostPersistent: 2976 DevicePersistent: 19456 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd HostPersistent: 2976 DevicePersistent: 3072 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd + StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 + QuantLinearNode__1060_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd + StatefulPartitionedCall/model/activation_41/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + QuantLinearNode__1068_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd + StatefulPartitionedCall/model/add_20/add + StatefulPartitionedCall/model/activation_42/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 + QuantLinearNode__1080_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd + StatefulPartitionedCall/model/activation_43/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd + StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 + QuantLinearNode__1100_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd + StatefulPartitionedCall/model/activation_45/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd + StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 + QuantLinearNode__1120_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd + StatefulPartitionedCall/model/activation_47/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd + StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 + QuantLinearNode__1140_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd + StatefulPartitionedCall/model/activation_49/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd + StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 + QuantLinearNode__1160_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd + StatefulPartitionedCall/model/activation_51/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + QuantLinearNode__1168_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd + 
StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: DequantLinearNode__1173_quantize_scale_node HostPersistent: 0 DevicePersistent: 0 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 + QuantLinearNode__1180_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd + StatefulPartitionedCall/model/activation_53/Relu HostPersistent: 2976 DevicePersistent: 37888 [10/04/2021-21:26:39] [V] [TRT] Layer: Reformatting CopyNode for Input Tensor 1 to StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu HostPersistent: 0 DevicePersistent: 0 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu HostPersistent: 1664 DevicePersistent: 38912 [10/04/2021-21:26:39] [V] [TRT] Layer: Reformatting CopyNode for Input Tensor 0 to StatefulPartitionedCall/model/global_average_pooling2d/Mean HostPersistent: 0 DevicePersistent: 0 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/global_average_pooling2d/Mean HostPersistent: 48 DevicePersistent: 0 [10/04/2021-21:26:39] [V] [TRT] Layer: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd HostPersistent: 1120 DevicePersistent: 3072 [10/04/2021-21:26:39] [I] [TRT] Total Host Persistent Memory: 169488 [10/04/2021-21:26:39] [I] [TRT] Total Device Persistent Memory: 1032192 [10/04/2021-21:26:39] [I] [TRT] Total Scratch Memory: 0 [10/04/2021-21:26:39] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 4 MiB, GPU 5 MiB [10/04/2021-21:26:39] [V] [TRT] Using cublas a tactic source [10/04/2021-21:26:39] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1365, GPU 3625 (MiB) [10/04/2021-21:26:39] [V] [TRT] Using cuDNN as a tactic source [10/04/2021-21:26:39] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +1, GPU +0, now: CPU 1366, GPU 3625 (MiB) [10/04/2021-21:26:39] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1365, GPU 3625 (MiB) [10/04/2021-21:26:39] [V] [TRT] Engine generation completed in 17.3364 seconds. 
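The per-layer HostPersistent and DevicePersistent figures listed above roll up into the "Total Host Persistent Memory" and "Total Device Persistent Memory" lines. A small sketch that re-derives such totals from a saved verbose log follows; it is an illustrative helper, not a trtexec feature, the file name build.log is assumed, and the sums will only match TensorRT's totals if the log contains every per-layer line (this excerpt is truncated).

    # Sketch: sum per-layer persistent-memory figures from a trtexec verbose log.
    import re

    # Matches fragments such as "HostPersistent: 2976 DevicePersistent: 9728"
    PERSISTENT = re.compile(r"HostPersistent: (\d+) DevicePersistent: (\d+)")

    def persistent_totals(path):
        host_total = device_total = 0
        with open(path) as log:
            for line in log:
                for m in PERSISTENT.finditer(line):
                    host_total += int(m.group(1))
                    device_total += int(m.group(2))
        return host_total, device_total

    if __name__ == "__main__":
        host, device = persistent_totals("build.log")
        print(f"Summed Host Persistent Memory: {host}")
        print(f"Summed Device Persistent Memory: {device}")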
[10/04/2021-21:26:39] [V] [TRT] Deleting timing cache: 67 entries, 346 hits [10/04/2021-21:26:39] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1365, GPU 3625 (MiB) [10/04/2021-21:26:39] [V] [TRT] Engine Layer Information: Layer(Scale): QuantLinearNode__628_quantize_scale_node, Tactic: 0, input_1[Float(1,3,32,32)] -> QuantLinearNode__628:0[Int8(1,3,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu, Tactic: 3768633326807889446, QuantLinearNode__628:0[Int8(1,3,32,32)] -> QuantLinearNode__640:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu, Tactic: 3768633326807889446, QuantLinearNode__640:0[Int8(1,16,32,32)] -> QuantLinearNode__648:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu, Tactic: 7585914864117166414, QuantLinearNode__648:0[Int8(1,16,32,32)], QuantLinearNode__640:0[Int8(1,16,32,32)] -> QuantLinearNode__660:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 + QuantLinearNode__664_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd + StatefulPartitionedCall/model/activation_3/Relu, Tactic: 3768633326807889446, QuantLinearNode__660:0[Int8(1,16,32,32)] -> QuantLinearNode__668:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd + StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu, Tactic: 7585914864117166414, QuantLinearNode__668:0[Int8(1,16,32,32)], QuantLinearNode__660:0[Int8(1,16,32,32)] -> QuantLinearNode__680:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 + QuantLinearNode__684_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd + StatefulPartitionedCall/model/activation_5/Relu, Tactic: 3768633326807889446, QuantLinearNode__680:0[Int8(1,16,32,32)] -> QuantLinearNode__688:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + QuantLinearNode__692_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd + StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu, Tactic: 7585914864117166414, QuantLinearNode__688:0[Int8(1,16,32,32)], QuantLinearNode__680:0[Int8(1,16,32,32)] -> QuantLinearNode__700:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 + QuantLinearNode__704_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd + StatefulPartitionedCall/model/activation_7/Relu, Tactic: 3768633326807889446, QuantLinearNode__700:0[Int8(1,16,32,32)] -> QuantLinearNode__708:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd + StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu, Tactic: 7585914864117166414, QuantLinearNode__708:0[Int8(1,16,32,32)], QuantLinearNode__700:0[Int8(1,16,32,32)] -> QuantLinearNode__720:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 + QuantLinearNode__724_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd + StatefulPartitionedCall/model/activation_9/Relu, Tactic: 3768633326807889446, QuantLinearNode__720:0[Int8(1,16,32,32)] -> QuantLinearNode__728:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd + StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu, Tactic: 7585914864117166414, QuantLinearNode__728:0[Int8(1,16,32,32)], QuantLinearNode__720:0[Int8(1,16,32,32)] -> QuantLinearNode__740:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 + QuantLinearNode__744_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd + StatefulPartitionedCall/model/activation_11/Relu, Tactic: 3768633326807889446, QuantLinearNode__740:0[Int8(1,16,32,32)] -> QuantLinearNode__748:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + QuantLinearNode__752_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd + StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu, Tactic: 7585914864117166414, QuantLinearNode__748:0[Int8(1,16,32,32)], QuantLinearNode__740:0[Int8(1,16,32,32)] -> QuantLinearNode__760:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 + QuantLinearNode__764_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd + StatefulPartitionedCall/model/activation_13/Relu, Tactic: 3768633326807889446, QuantLinearNode__760:0[Int8(1,16,32,32)] -> QuantLinearNode__768:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + QuantLinearNode__772_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd + StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu, Tactic: 7585914864117166414, QuantLinearNode__768:0[Int8(1,16,32,32)], QuantLinearNode__760:0[Int8(1,16,32,32)] -> QuantLinearNode__780:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 + QuantLinearNode__784_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd + StatefulPartitionedCall/model/activation_15/Relu, Tactic: 3768633326807889446, QuantLinearNode__780:0[Int8(1,16,32,32)] -> QuantLinearNode__788:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd + StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu, Tactic: 7585914864117166414, QuantLinearNode__788:0[Int8(1,16,32,32)], QuantLinearNode__780:0[Int8(1,16,32,32)] -> QuantLinearNode__800:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 + 
QuantLinearNode__804_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd + StatefulPartitionedCall/model/activation_17/Relu, Tactic: 3768633326807889446, QuantLinearNode__800:0[Int8(1,16,32,32)] -> QuantLinearNode__808:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd + StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu, Tactic: 7585914864117166414, QuantLinearNode__808:0[Int8(1,16,32,32)], QuantLinearNode__800:0[Int8(1,16,32,32)] -> QuantLinearNode__828:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu, Tactic: 5105539492142133503, QuantLinearNode__828:0[Int8(1,16,32,32)] -> QuantLinearNode__836:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd, Tactic: -4693009430365516309, QuantLinearNode__828:0[Int8(1,16,32,32)] -> QuantLinearNode__824:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu, Tactic: 3768633326807889446, QuantLinearNode__836:0[Int8(1,32,16,16)], QuantLinearNode__824:0[Int8(1,32,16,16)] -> QuantLinearNode__848:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu, Tactic: 3768633326807889446, QuantLinearNode__848:0[Int8(1,32,16,16)] -> QuantLinearNode__856:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd + StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu, Tactic: 3768633326807889446, QuantLinearNode__856:0[Int8(1,32,16,16)], QuantLinearNode__848:0[Int8(1,32,16,16)] -> QuantLinearNode__868:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 + QuantLinearNode__872_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd + StatefulPartitionedCall/model/activation_23/Relu, Tactic: 3768633326807889446, QuantLinearNode__868:0[Int8(1,32,16,16)] -> QuantLinearNode__876:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd + StatefulPartitionedCall/model/add_11/add + StatefulPartitionedCall/model/activation_24/Relu, Tactic: 3768633326807889446, QuantLinearNode__876:0[Int8(1,32,16,16)], QuantLinearNode__868:0[Int8(1,32,16,16)] -> QuantLinearNode__888:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 + QuantLinearNode__892_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd + 
StatefulPartitionedCall/model/activation_25/Relu, Tactic: 3768633326807889446, QuantLinearNode__888:0[Int8(1,32,16,16)] -> QuantLinearNode__896:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd + StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu, Tactic: 3768633326807889446, QuantLinearNode__896:0[Int8(1,32,16,16)], QuantLinearNode__888:0[Int8(1,32,16,16)] -> QuantLinearNode__908:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 + QuantLinearNode__912_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd + StatefulPartitionedCall/model/activation_27/Relu, Tactic: 3768633326807889446, QuantLinearNode__908:0[Int8(1,32,16,16)] -> QuantLinearNode__916:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd + StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu, Tactic: 3768633326807889446, QuantLinearNode__916:0[Int8(1,32,16,16)], QuantLinearNode__908:0[Int8(1,32,16,16)] -> QuantLinearNode__928:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 + QuantLinearNode__932_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd + StatefulPartitionedCall/model/activation_29/Relu, Tactic: 3768633326807889446, QuantLinearNode__928:0[Int8(1,32,16,16)] -> QuantLinearNode__936:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd + StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu, Tactic: 3768633326807889446, QuantLinearNode__936:0[Int8(1,32,16,16)], QuantLinearNode__928:0[Int8(1,32,16,16)] -> QuantLinearNode__948:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 + QuantLinearNode__952_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd + StatefulPartitionedCall/model/activation_31/Relu, Tactic: 3768633326807889446, QuantLinearNode__948:0[Int8(1,32,16,16)] -> QuantLinearNode__956:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd + StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu, Tactic: 3768633326807889446, QuantLinearNode__956:0[Int8(1,32,16,16)], QuantLinearNode__948:0[Int8(1,32,16,16)] -> QuantLinearNode__968:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 + QuantLinearNode__972_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd + StatefulPartitionedCall/model/activation_33/Relu, Tactic: 3768633326807889446, QuantLinearNode__968:0[Int8(1,32,16,16)] -> QuantLinearNode__976:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd + StatefulPartitionedCall/model/add_16/add + 
StatefulPartitionedCall/model/activation_34/Relu, Tactic: 3768633326807889446, QuantLinearNode__976:0[Int8(1,32,16,16)], QuantLinearNode__968:0[Int8(1,32,16,16)] -> QuantLinearNode__988:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 + QuantLinearNode__992_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd + StatefulPartitionedCall/model/activation_35/Relu, Tactic: 3768633326807889446, QuantLinearNode__988:0[Int8(1,32,16,16)] -> QuantLinearNode__996:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd + StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu, Tactic: 3768633326807889446, QuantLinearNode__996:0[Int8(1,32,16,16)], QuantLinearNode__988:0[Int8(1,32,16,16)] -> QuantLinearNode__1016:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu, Tactic: 5105539492142133503, QuantLinearNode__1016:0[Int8(1,32,16,16)] -> QuantLinearNode__1024:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd, Tactic: -3726322024058434766, QuantLinearNode__1016:0[Int8(1,32,16,16)] -> QuantLinearNode__1012:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu, Tactic: 5105539492142133503, QuantLinearNode__1024:0[Int8(1,64,8,8)], QuantLinearNode__1012:0[Int8(1,64,8,8)] -> QuantLinearNode__1036:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu, Tactic: 5105539492142133503, QuantLinearNode__1036:0[Int8(1,64,8,8)] -> QuantLinearNode__1044:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd + StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu, Tactic: 5105539492142133503, QuantLinearNode__1044:0[Int8(1,64,8,8)], QuantLinearNode__1036:0[Int8(1,64,8,8)] -> QuantLinearNode__1056:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 + QuantLinearNode__1060_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd + StatefulPartitionedCall/model/activation_41/Relu, Tactic: 5105539492142133503, QuantLinearNode__1056:0[Int8(1,64,8,8)] -> QuantLinearNode__1064:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + QuantLinearNode__1068_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd + StatefulPartitionedCall/model/add_20/add + StatefulPartitionedCall/model/activation_42/Relu, Tactic: 5105539492142133503, 
QuantLinearNode__1064:0[Int8(1,64,8,8)], QuantLinearNode__1056:0[Int8(1,64,8,8)] -> QuantLinearNode__1076:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 + QuantLinearNode__1080_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd + StatefulPartitionedCall/model/activation_43/Relu, Tactic: 5105539492142133503, QuantLinearNode__1076:0[Int8(1,64,8,8)] -> QuantLinearNode__1084:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd + StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu, Tactic: 5105539492142133503, QuantLinearNode__1084:0[Int8(1,64,8,8)], QuantLinearNode__1076:0[Int8(1,64,8,8)] -> QuantLinearNode__1096:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 + QuantLinearNode__1100_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd + StatefulPartitionedCall/model/activation_45/Relu, Tactic: 5105539492142133503, QuantLinearNode__1096:0[Int8(1,64,8,8)] -> QuantLinearNode__1104:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd + StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu, Tactic: 5105539492142133503, QuantLinearNode__1104:0[Int8(1,64,8,8)], QuantLinearNode__1096:0[Int8(1,64,8,8)] -> QuantLinearNode__1116:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 + QuantLinearNode__1120_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd + StatefulPartitionedCall/model/activation_47/Relu, Tactic: 5105539492142133503, QuantLinearNode__1116:0[Int8(1,64,8,8)] -> QuantLinearNode__1124:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd + StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu, Tactic: 5105539492142133503, QuantLinearNode__1124:0[Int8(1,64,8,8)], QuantLinearNode__1116:0[Int8(1,64,8,8)] -> QuantLinearNode__1136:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 + QuantLinearNode__1140_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd + StatefulPartitionedCall/model/activation_49/Relu, Tactic: 5105539492142133503, QuantLinearNode__1136:0[Int8(1,64,8,8)] -> QuantLinearNode__1144:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd + StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu, Tactic: 5105539492142133503, QuantLinearNode__1144:0[Int8(1,64,8,8)], QuantLinearNode__1136:0[Int8(1,64,8,8)] -> QuantLinearNode__1156:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 + QuantLinearNode__1160_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd + StatefulPartitionedCall/model/activation_51/Relu, Tactic: 5105539492142133503, 
QuantLinearNode__1156:0[Int8(1,64,8,8)] -> QuantLinearNode__1164:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + QuantLinearNode__1168_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd + StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu, Tactic: 5105539492142133503, QuantLinearNode__1164:0[Int8(1,64,8,8)], QuantLinearNode__1156:0[Int8(1,64,8,8)] -> QuantLinearNode__1176:0[Int8(1,64,8,8)] Layer(Scale): DequantLinearNode__1173_quantize_scale_node, Tactic: 0, QuantLinearNode__1176:0[Int8(1,64,8,8)] -> StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0[Float(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 + QuantLinearNode__1180_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd + StatefulPartitionedCall/model/activation_53/Relu, Tactic: 5105539492142133503, QuantLinearNode__1176:0[Int8(1,64,8,8)] -> QuantLinearNode__1184:0[Int8(1,64,8,8)] Layer(Reformat): Reformatting CopyNode for Input Tensor 1 to StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu, Tactic: 0, StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0[Float(1,64,8,8)] -> Reformatted Input Tensor 1 to StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu[Float(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu, Tactic: -733152064595858464, QuantLinearNode__1184:0[Int8(1,64,8,8)], Reformatted Input Tensor 1 to StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu[Float(1,64,8,8)] -> StatefulPartitionedCall/model/activation_54/Relu:0[Float(1,64,8,8)] Layer(Reformat): Reformatting CopyNode for Input Tensor 0 to StatefulPartitionedCall/model/global_average_pooling2d/Mean, Tactic: 0, StatefulPartitionedCall/model/activation_54/Relu:0[Float(1,64,8,8)] -> Reformatted Input Tensor 0 to StatefulPartitionedCall/model/global_average_pooling2d/Mean[Float(1,64,8,8)] Layer(CudnnPooling): StatefulPartitionedCall/model/global_average_pooling2d/Mean, Tactic: -1, Reformatted Input Tensor 0 to StatefulPartitionedCall/model/global_average_pooling2d/Mean[Float(1,64,8,8)] -> StatefulPartitionedCall/model/global_average_pooling2d/Mean:0[Float(1,64,1,1)] Layer(CaskConvolution): StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd, Tactic: -6194327789991425125, StatefulPartitionedCall/model/global_average_pooling2d/Mean:0[Float(1,64,1,1)] -> StatefulPartitionedCall/model/dense/BiasAdd_out_tensor[Float(1,10,1,1)] [10/04/2021-21:26:39] [I] [TRT] [MemUsageSnapshot] Builder end: CPU 1361 MiB, GPU 3625 MiB [10/04/2021-21:26:39] [I] [TRT] Loaded engine size: 1 
MB [10/04/2021-21:26:39] [I] [TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 1361 MiB, GPU 3625 MiB [10/04/2021-21:26:39] [V] [TRT] Using cublas a tactic source [10/04/2021-21:26:39] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1362, GPU 3625 (MiB) [10/04/2021-21:26:39] [V] [TRT] Using cuDNN as a tactic source [10/04/2021-21:26:39] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1362, GPU 3625 (MiB) [10/04/2021-21:26:39] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1362, GPU 3625 (MiB) [10/04/2021-21:26:39] [V] [TRT] Deserialization required 13303 microseconds. [10/04/2021-21:26:39] [I] [TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 1362 MiB, GPU 3625 MiB [10/04/2021-21:26:39] [I] Engine built in 20.115 sec. [10/04/2021-21:26:39] [I] [TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 1354 MiB, GPU 3625 MiB [10/04/2021-21:26:39] [V] [TRT] Using cublas a tactic source [10/04/2021-21:26:39] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1354, GPU 3625 (MiB) [10/04/2021-21:26:39] [V] [TRT] Using cuDNN as a tactic source [10/04/2021-21:26:39] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1354, GPU 3625 (MiB) [10/04/2021-21:26:39] [V] [TRT] Total per-runner device memory is 1032192 [10/04/2021-21:26:39] [V] [TRT] Total per-runner host memory is 169488 [10/04/2021-21:26:39] [V] [TRT] Allocated activation device memory of size 98304 [10/04/2021-21:26:39] [I] [TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 1354 MiB, GPU 3625 MiB [10/04/2021-21:26:39] [I] Created input binding for input_1 with dimensions 1x3x32x32 [10/04/2021-21:26:39] [I] Created output binding for dense with dimensions 1x10 [10/04/2021-21:26:39] [I] Starting inference [10/04/2021-21:26:42] [I] Warmup completed 246 queries over 200 ms [10/04/2021-21:26:42] [I] Timing trace has 3785 queries over 3.00149 s [10/04/2021-21:26:42] [I] [10/04/2021-21:26:42] [I] === Trace details === [10/04/2021-21:26:42] [I] Trace averages of 10 runs: [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791225 ms - Host latency: 0.791225 ms (end to end 0.791225 ms, enqueue 0.0922531 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792848 ms - Host latency: 0.792848 ms (end to end 0.792848 ms, enqueue 0.0718658 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792082 ms - Host latency: 0.792082 ms (end to end 0.792082 ms, enqueue 0.0686996 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791859 ms - Host latency: 0.791859 ms (end to end 0.791859 ms, enqueue 0.0699173 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792761 ms - Host latency: 0.792761 ms (end to end 0.792761 ms, enqueue 0.0688202 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.790982 ms - Host latency: 0.790982 ms (end to end 0.790982 ms, enqueue 0.0700897 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792819 ms - Host latency: 0.792819 ms (end to end 0.792819 ms, enqueue 0.0716003 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792563 ms - Host latency: 0.792563 ms (end to end 0.792563 ms, enqueue 0.0680695 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792056 ms - Host latency: 0.792056 ms (end to end 0.792056 ms, enqueue 0.0725098 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792099 ms - Host latency: 0.792099 ms (end to end 0.792099 ms, enqueue 0.0688141 ms) 
[10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791107 ms - Host latency: 0.791107 ms (end to end 0.791107 ms, enqueue 0.0731354 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792181 ms - Host latency: 0.792181 ms (end to end 0.792181 ms, enqueue 0.0707947 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791763 ms - Host latency: 0.791763 ms (end to end 0.791763 ms, enqueue 0.071759 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.790906 ms - Host latency: 0.790906 ms (end to end 0.790906 ms, enqueue 0.0685364 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792917 ms - Host latency: 0.792917 ms (end to end 0.792917 ms, enqueue 0.0682098 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791763 ms - Host latency: 0.791763 ms (end to end 0.791763 ms, enqueue 0.0695496 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792029 ms - Host latency: 0.792029 ms (end to end 0.792029 ms, enqueue 0.0926117 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792044 ms - Host latency: 0.792044 ms (end to end 0.792044 ms, enqueue 0.0738373 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792648 ms - Host latency: 0.792648 ms (end to end 0.792648 ms, enqueue 0.0730347 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792636 ms - Host latency: 0.792636 ms (end to end 0.792636 ms, enqueue 0.0709167 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791846 ms - Host latency: 0.791846 ms (end to end 0.791846 ms, enqueue 0.0690796 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793207 ms - Host latency: 0.793207 ms (end to end 0.793207 ms, enqueue 0.0694305 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792584 ms - Host latency: 0.792584 ms (end to end 0.792584 ms, enqueue 0.0698212 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791098 ms - Host latency: 0.791098 ms (end to end 0.791098 ms, enqueue 0.0716614 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792667 ms - Host latency: 0.792667 ms (end to end 0.792667 ms, enqueue 0.0694641 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792935 ms - Host latency: 0.792935 ms (end to end 0.792935 ms, enqueue 0.0690002 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792258 ms - Host latency: 0.792258 ms (end to end 0.792258 ms, enqueue 0.0681274 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792093 ms - Host latency: 0.792093 ms (end to end 0.792093 ms, enqueue 0.0685516 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791821 ms - Host latency: 0.791821 ms (end to end 0.791821 ms, enqueue 0.0712067 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792789 ms - Host latency: 0.792789 ms (end to end 0.792789 ms, enqueue 0.0704468 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79295 ms - Host latency: 0.79295 ms (end to end 0.79295 ms, enqueue 0.0692352 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791278 ms - Host latency: 0.791278 ms (end to end 0.791278 ms, enqueue 0.0770355 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792087 ms - Host latency: 0.792087 ms (end to end 0.792087 ms, enqueue 0.069455 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792148 ms - Host latency: 0.792148 ms (end to end 0.792148 ms, enqueue 0.0697357 ms) [10/04/2021-21:26:42] [I] 
Average on 10 runs - GPU latency: 0.791916 ms - Host latency: 0.791916 ms (end to end 0.791916 ms, enqueue 0.0715485 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792303 ms - Host latency: 0.792303 ms (end to end 0.792303 ms, enqueue 0.070282 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792542 ms - Host latency: 0.792542 ms (end to end 0.792542 ms, enqueue 0.0680725 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791422 ms - Host latency: 0.791422 ms (end to end 0.791422 ms, enqueue 0.0687134 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792349 ms - Host latency: 0.792349 ms (end to end 0.792349 ms, enqueue 0.0683685 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792792 ms - Host latency: 0.792792 ms (end to end 0.792792 ms, enqueue 0.0667877 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792047 ms - Host latency: 0.792047 ms (end to end 0.792047 ms, enqueue 0.068573 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791656 ms - Host latency: 0.791656 ms (end to end 0.791656 ms, enqueue 0.0716736 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791321 ms - Host latency: 0.791321 ms (end to end 0.791321 ms, enqueue 0.0694092 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792438 ms - Host latency: 0.792438 ms (end to end 0.792438 ms, enqueue 0.068103 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792554 ms - Host latency: 0.792554 ms (end to end 0.792554 ms, enqueue 0.0673218 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791724 ms - Host latency: 0.791724 ms (end to end 0.791724 ms, enqueue 0.070575 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791309 ms - Host latency: 0.791309 ms (end to end 0.791309 ms, enqueue 0.0773621 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792163 ms - Host latency: 0.792163 ms (end to end 0.792163 ms, enqueue 0.0681885 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791644 ms - Host latency: 0.791644 ms (end to end 0.791644 ms, enqueue 0.0682739 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792694 ms - Host latency: 0.792694 ms (end to end 0.792694 ms, enqueue 0.068512 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792078 ms - Host latency: 0.792078 ms (end to end 0.792078 ms, enqueue 0.0683838 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793146 ms - Host latency: 0.793146 ms (end to end 0.793146 ms, enqueue 0.0687866 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79198 ms - Host latency: 0.79198 ms (end to end 0.79198 ms, enqueue 0.0669922 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.7935 ms - Host latency: 0.7935 ms (end to end 0.7935 ms, enqueue 0.0688355 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792383 ms - Host latency: 0.792383 ms (end to end 0.792383 ms, enqueue 0.0834717 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792279 ms - Host latency: 0.792279 ms (end to end 0.792279 ms, enqueue 0.0675476 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791516 ms - Host latency: 0.791516 ms (end to end 0.791516 ms, enqueue 0.0689392 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792651 ms - Host latency: 0.792651 ms (end to end 0.792651 ms, enqueue 0.0683167 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 
0.792572 ms - Host latency: 0.792572 ms (end to end 0.792572 ms, enqueue 0.0737244 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792554 ms - Host latency: 0.792554 ms (end to end 0.792554 ms, enqueue 0.0672363 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793243 ms - Host latency: 0.793243 ms (end to end 0.793243 ms, enqueue 0.0712036 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791254 ms - Host latency: 0.791254 ms (end to end 0.791254 ms, enqueue 0.0667541 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793103 ms - Host latency: 0.793103 ms (end to end 0.793103 ms, enqueue 0.0681702 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791827 ms - Host latency: 0.791827 ms (end to end 0.791827 ms, enqueue 0.0671326 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791565 ms - Host latency: 0.791565 ms (end to end 0.791565 ms, enqueue 0.0683838 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791687 ms - Host latency: 0.791687 ms (end to end 0.791687 ms, enqueue 0.068042 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791827 ms - Host latency: 0.791827 ms (end to end 0.791827 ms, enqueue 0.0671814 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791614 ms - Host latency: 0.791614 ms (end to end 0.791614 ms, enqueue 0.0680664 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792084 ms - Host latency: 0.792084 ms (end to end 0.792084 ms, enqueue 0.0703064 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791772 ms - Host latency: 0.791772 ms (end to end 0.791772 ms, enqueue 0.0771912 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792218 ms - Host latency: 0.792218 ms (end to end 0.792218 ms, enqueue 0.0742493 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791705 ms - Host latency: 0.791705 ms (end to end 0.791705 ms, enqueue 0.0699341 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792139 ms - Host latency: 0.792139 ms (end to end 0.792139 ms, enqueue 0.0689087 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79245 ms - Host latency: 0.79245 ms (end to end 0.79245 ms, enqueue 0.0725647 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793402 ms - Host latency: 0.793402 ms (end to end 0.793402 ms, enqueue 0.0716126 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792499 ms - Host latency: 0.792499 ms (end to end 0.792499 ms, enqueue 0.068866 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791821 ms - Host latency: 0.791821 ms (end to end 0.791821 ms, enqueue 0.0702209 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79223 ms - Host latency: 0.79223 ms (end to end 0.79223 ms, enqueue 0.0696777 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792468 ms - Host latency: 0.792468 ms (end to end 0.792468 ms, enqueue 0.0701721 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792102 ms - Host latency: 0.792102 ms (end to end 0.792102 ms, enqueue 0.0700317 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792419 ms - Host latency: 0.792419 ms (end to end 0.792419 ms, enqueue 0.0710999 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792401 ms - Host latency: 0.792401 ms (end to end 0.792401 ms, enqueue 0.0728943 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791553 ms - Host latency: 
0.791553 ms (end to end 0.791553 ms, enqueue 0.0678894 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79198 ms - Host latency: 0.79198 ms (end to end 0.79198 ms, enqueue 0.0687683 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792999 ms - Host latency: 0.792999 ms (end to end 0.792999 ms, enqueue 0.069635 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792725 ms - Host latency: 0.792725 ms (end to end 0.792725 ms, enqueue 0.0693115 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791577 ms - Host latency: 0.791577 ms (end to end 0.791577 ms, enqueue 0.0675842 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.790979 ms - Host latency: 0.790979 ms (end to end 0.790979 ms, enqueue 0.0686584 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791626 ms - Host latency: 0.791626 ms (end to end 0.791626 ms, enqueue 0.0682434 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791968 ms - Host latency: 0.791968 ms (end to end 0.791968 ms, enqueue 0.0700928 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792206 ms - Host latency: 0.792206 ms (end to end 0.792206 ms, enqueue 0.0684814 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791071 ms - Host latency: 0.791071 ms (end to end 0.791071 ms, enqueue 0.0720032 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791687 ms - Host latency: 0.791687 ms (end to end 0.791687 ms, enqueue 0.0775269 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79245 ms - Host latency: 0.79245 ms (end to end 0.79245 ms, enqueue 0.0697876 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792102 ms - Host latency: 0.792102 ms (end to end 0.792102 ms, enqueue 0.0684021 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791895 ms - Host latency: 0.791895 ms (end to end 0.791895 ms, enqueue 0.0674622 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79201 ms - Host latency: 0.79201 ms (end to end 0.79201 ms, enqueue 0.0684814 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791187 ms - Host latency: 0.791187 ms (end to end 0.791187 ms, enqueue 0.0691772 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79176 ms - Host latency: 0.79176 ms (end to end 0.79176 ms, enqueue 0.0672974 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792584 ms - Host latency: 0.792584 ms (end to end 0.792584 ms, enqueue 0.0678467 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791754 ms - Host latency: 0.791754 ms (end to end 0.791754 ms, enqueue 0.0690491 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791626 ms - Host latency: 0.791626 ms (end to end 0.791626 ms, enqueue 0.068866 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791742 ms - Host latency: 0.791742 ms (end to end 0.791742 ms, enqueue 0.0706177 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79339 ms - Host latency: 0.79339 ms (end to end 0.79339 ms, enqueue 0.0702637 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792236 ms - Host latency: 0.792236 ms (end to end 0.792236 ms, enqueue 0.0687622 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792627 ms - Host latency: 0.792627 ms (end to end 0.792627 ms, enqueue 0.0671875 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791528 ms - Host latency: 0.791528 ms (end to end 0.791528 ms, 
enqueue 0.0688355 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792078 ms - Host latency: 0.792078 ms (end to end 0.792078 ms, enqueue 0.0683838 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79259 ms - Host latency: 0.79259 ms (end to end 0.79259 ms, enqueue 0.0682495 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792383 ms - Host latency: 0.792383 ms (end to end 0.792383 ms, enqueue 0.0671631 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791675 ms - Host latency: 0.791675 ms (end to end 0.791675 ms, enqueue 0.0688232 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792627 ms - Host latency: 0.792627 ms (end to end 0.792627 ms, enqueue 0.0697388 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791187 ms - Host latency: 0.791187 ms (end to end 0.791187 ms, enqueue 0.0668823 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792651 ms - Host latency: 0.792651 ms (end to end 0.792651 ms, enqueue 0.0682495 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792566 ms - Host latency: 0.792566 ms (end to end 0.792566 ms, enqueue 0.0698364 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791699 ms - Host latency: 0.791699 ms (end to end 0.791699 ms, enqueue 0.070166 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792163 ms - Host latency: 0.792163 ms (end to end 0.792163 ms, enqueue 0.0695923 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793286 ms - Host latency: 0.793286 ms (end to end 0.793286 ms, enqueue 0.0690552 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792151 ms - Host latency: 0.792151 ms (end to end 0.792151 ms, enqueue 0.070227 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792236 ms - Host latency: 0.792236 ms (end to end 0.792236 ms, enqueue 0.0685547 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.790698 ms - Host latency: 0.790698 ms (end to end 0.790698 ms, enqueue 0.0680786 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793274 ms - Host latency: 0.793274 ms (end to end 0.793274 ms, enqueue 0.0693726 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792822 ms - Host latency: 0.792822 ms (end to end 0.792822 ms, enqueue 0.0675781 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791663 ms - Host latency: 0.791663 ms (end to end 0.791663 ms, enqueue 0.0672729 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793457 ms - Host latency: 0.793457 ms (end to end 0.793457 ms, enqueue 0.0700562 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793201 ms - Host latency: 0.793201 ms (end to end 0.793201 ms, enqueue 0.0678345 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792712 ms - Host latency: 0.792712 ms (end to end 0.792712 ms, enqueue 0.072876 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792944 ms - Host latency: 0.792944 ms (end to end 0.792944 ms, enqueue 0.0675903 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792468 ms - Host latency: 0.792468 ms (end to end 0.792468 ms, enqueue 0.0685913 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793787 ms - Host latency: 0.793787 ms (end to end 0.793787 ms, enqueue 0.069043 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793249 ms - Host latency: 0.793249 ms (end to end 0.793249 ms, enqueue 0.0684692 ms) 
[10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792468 ms - Host latency: 0.792468 ms (end to end 0.792468 ms, enqueue 0.0681396 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792358 ms - Host latency: 0.792358 ms (end to end 0.792358 ms, enqueue 0.0680176 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.790625 ms - Host latency: 0.790625 ms (end to end 0.790625 ms, enqueue 0.0705688 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792383 ms - Host latency: 0.792383 ms (end to end 0.792383 ms, enqueue 0.0682617 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792517 ms - Host latency: 0.792517 ms (end to end 0.792517 ms, enqueue 0.0711792 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792529 ms - Host latency: 0.792529 ms (end to end 0.792529 ms, enqueue 0.0687012 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792651 ms - Host latency: 0.792651 ms (end to end 0.792651 ms, enqueue 0.0679199 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791589 ms - Host latency: 0.791589 ms (end to end 0.791589 ms, enqueue 0.0701416 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79032 ms - Host latency: 0.79032 ms (end to end 0.79032 ms, enqueue 0.068335 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79248 ms - Host latency: 0.79248 ms (end to end 0.79248 ms, enqueue 0.068396 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79209 ms - Host latency: 0.79209 ms (end to end 0.79209 ms, enqueue 0.0681641 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792639 ms - Host latency: 0.792639 ms (end to end 0.792639 ms, enqueue 0.0680054 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792749 ms - Host latency: 0.792749 ms (end to end 0.792749 ms, enqueue 0.076001 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792554 ms - Host latency: 0.792554 ms (end to end 0.792554 ms, enqueue 0.0676147 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79187 ms - Host latency: 0.79187 ms (end to end 0.79187 ms, enqueue 0.0680908 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792212 ms - Host latency: 0.792212 ms (end to end 0.792212 ms, enqueue 0.0671021 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792371 ms - Host latency: 0.792371 ms (end to end 0.792371 ms, enqueue 0.0682129 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791931 ms - Host latency: 0.791931 ms (end to end 0.791931 ms, enqueue 0.0877319 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792175 ms - Host latency: 0.792175 ms (end to end 0.792175 ms, enqueue 0.0715942 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792065 ms - Host latency: 0.792065 ms (end to end 0.792065 ms, enqueue 0.0703125 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791943 ms - Host latency: 0.791943 ms (end to end 0.791943 ms, enqueue 0.0670288 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79082 ms - Host latency: 0.79082 ms (end to end 0.79082 ms, enqueue 0.0681274 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792468 ms - Host latency: 0.792468 ms (end to end 0.792468 ms, enqueue 0.0679321 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792712 ms - Host latency: 0.792712 ms (end to end 0.792712 ms, enqueue 0.0688721 ms) [10/04/2021-21:26:42] [I] Average on 10 runs 
- GPU latency: 0.791406 ms - Host latency: 0.791406 ms (end to end 0.791406 ms, enqueue 0.0682251 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791968 ms - Host latency: 0.791968 ms (end to end 0.791968 ms, enqueue 0.0687012 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791199 ms - Host latency: 0.791199 ms (end to end 0.791199 ms, enqueue 0.0675537 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79137 ms - Host latency: 0.79137 ms (end to end 0.79137 ms, enqueue 0.0689819 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792773 ms - Host latency: 0.792773 ms (end to end 0.792773 ms, enqueue 0.0671875 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792554 ms - Host latency: 0.792554 ms (end to end 0.792554 ms, enqueue 0.0725098 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792004 ms - Host latency: 0.792004 ms (end to end 0.792004 ms, enqueue 0.0718506 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792175 ms - Host latency: 0.792175 ms (end to end 0.792175 ms, enqueue 0.0678589 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792102 ms - Host latency: 0.792102 ms (end to end 0.792102 ms, enqueue 0.0686157 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791406 ms - Host latency: 0.791406 ms (end to end 0.791406 ms, enqueue 0.0675537 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793604 ms - Host latency: 0.793604 ms (end to end 0.793604 ms, enqueue 0.068689 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792529 ms - Host latency: 0.792529 ms (end to end 0.792529 ms, enqueue 0.0674072 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791785 ms - Host latency: 0.791785 ms (end to end 0.791785 ms, enqueue 0.0695068 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792346 ms - Host latency: 0.792346 ms (end to end 0.792346 ms, enqueue 0.0683472 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792322 ms - Host latency: 0.792322 ms (end to end 0.792322 ms, enqueue 0.0685547 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792444 ms - Host latency: 0.792444 ms (end to end 0.792444 ms, enqueue 0.06875 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792285 ms - Host latency: 0.792285 ms (end to end 0.792285 ms, enqueue 0.0698853 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791211 ms - Host latency: 0.791211 ms (end to end 0.791211 ms, enqueue 0.0776978 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792126 ms - Host latency: 0.792126 ms (end to end 0.792126 ms, enqueue 0.0669922 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792676 ms - Host latency: 0.792676 ms (end to end 0.792676 ms, enqueue 0.0696167 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791394 ms - Host latency: 0.791394 ms (end to end 0.791394 ms, enqueue 0.0681274 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792273 ms - Host latency: 0.792273 ms (end to end 0.792273 ms, enqueue 0.0681274 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791333 ms - Host latency: 0.791333 ms (end to end 0.791333 ms, enqueue 0.0682373 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792041 ms - Host latency: 0.792041 ms (end to end 0.792041 ms, enqueue 0.0671509 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791907 ms - 
Host latency: 0.791907 ms (end to end 0.791907 ms, enqueue 0.0687134 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792102 ms - Host latency: 0.792102 ms (end to end 0.792102 ms, enqueue 0.069104 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791919 ms - Host latency: 0.791919 ms (end to end 0.791919 ms, enqueue 0.0675903 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791064 ms - Host latency: 0.791064 ms (end to end 0.791064 ms, enqueue 0.0680176 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792908 ms - Host latency: 0.792908 ms (end to end 0.792908 ms, enqueue 0.0750366 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792554 ms - Host latency: 0.792554 ms (end to end 0.792554 ms, enqueue 0.0692627 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791455 ms - Host latency: 0.791455 ms (end to end 0.791455 ms, enqueue 0.0687622 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792139 ms - Host latency: 0.792139 ms (end to end 0.792139 ms, enqueue 0.0668213 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791699 ms - Host latency: 0.791699 ms (end to end 0.791699 ms, enqueue 0.0684814 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792139 ms - Host latency: 0.792139 ms (end to end 0.792139 ms, enqueue 0.0671265 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791675 ms - Host latency: 0.791675 ms (end to end 0.791675 ms, enqueue 0.0684448 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792419 ms - Host latency: 0.792419 ms (end to end 0.792419 ms, enqueue 0.0695679 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792004 ms - Host latency: 0.792004 ms (end to end 0.792004 ms, enqueue 0.0673828 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791309 ms - Host latency: 0.791309 ms (end to end 0.791309 ms, enqueue 0.0685547 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793176 ms - Host latency: 0.793176 ms (end to end 0.793176 ms, enqueue 0.0683716 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792053 ms - Host latency: 0.792053 ms (end to end 0.792053 ms, enqueue 0.0724609 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791492 ms - Host latency: 0.791492 ms (end to end 0.791492 ms, enqueue 0.0721924 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792554 ms - Host latency: 0.792554 ms (end to end 0.792554 ms, enqueue 0.0678711 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792773 ms - Host latency: 0.792773 ms (end to end 0.792773 ms, enqueue 0.0689453 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792957 ms - Host latency: 0.792957 ms (end to end 0.792957 ms, enqueue 0.0674072 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791907 ms - Host latency: 0.791907 ms (end to end 0.791907 ms, enqueue 0.0687622 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792749 ms - Host latency: 0.792749 ms (end to end 0.792749 ms, enqueue 0.0675171 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793835 ms - Host latency: 0.793835 ms (end to end 0.793835 ms, enqueue 0.0689575 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791541 ms - Host latency: 0.791541 ms (end to end 0.791541 ms, enqueue 0.0716797 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793518 ms - Host latency: 0.793518 ms 
(end to end 0.793518 ms, enqueue 0.0673096 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792224 ms - Host latency: 0.792224 ms (end to end 0.792224 ms, enqueue 0.0688355 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792444 ms - Host latency: 0.792444 ms (end to end 0.792444 ms, enqueue 0.0669189 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791724 ms - Host latency: 0.791724 ms (end to end 0.791724 ms, enqueue 0.070459 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791919 ms - Host latency: 0.791919 ms (end to end 0.791919 ms, enqueue 0.0704224 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792017 ms - Host latency: 0.792017 ms (end to end 0.792017 ms, enqueue 0.068689 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792749 ms - Host latency: 0.792749 ms (end to end 0.792749 ms, enqueue 0.0677856 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792578 ms - Host latency: 0.792578 ms (end to end 0.792578 ms, enqueue 0.0683716 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792456 ms - Host latency: 0.792456 ms (end to end 0.792456 ms, enqueue 0.0666626 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792175 ms - Host latency: 0.792175 ms (end to end 0.792175 ms, enqueue 0.0708374 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793274 ms - Host latency: 0.793274 ms (end to end 0.793274 ms, enqueue 0.0679932 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791797 ms - Host latency: 0.791797 ms (end to end 0.791797 ms, enqueue 0.0681885 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792163 ms - Host latency: 0.792163 ms (end to end 0.792163 ms, enqueue 0.0667969 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792981 ms - Host latency: 0.792981 ms (end to end 0.792981 ms, enqueue 0.0682983 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791333 ms - Host latency: 0.791333 ms (end to end 0.791333 ms, enqueue 0.0707886 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792639 ms - Host latency: 0.792639 ms (end to end 0.792639 ms, enqueue 0.0709717 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792261 ms - Host latency: 0.792261 ms (end to end 0.792261 ms, enqueue 0.0672119 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792041 ms - Host latency: 0.792041 ms (end to end 0.792041 ms, enqueue 0.0680908 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79353 ms - Host latency: 0.79353 ms (end to end 0.79353 ms, enqueue 0.067395 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792358 ms - Host latency: 0.792358 ms (end to end 0.792358 ms, enqueue 0.0691895 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792383 ms - Host latency: 0.792383 ms (end to end 0.792383 ms, enqueue 0.0676636 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791553 ms - Host latency: 0.791553 ms (end to end 0.791553 ms, enqueue 0.0677979 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792529 ms - Host latency: 0.792529 ms (end to end 0.792529 ms, enqueue 0.0680298 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791504 ms - Host latency: 0.791504 ms (end to end 0.791504 ms, enqueue 0.0670166 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792456 ms - Host latency: 0.792456 ms (end to end 0.792456 ms, enqueue 
0.0692139 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791333 ms - Host latency: 0.791333 ms (end to end 0.791333 ms, enqueue 0.0689941 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792188 ms - Host latency: 0.792188 ms (end to end 0.792188 ms, enqueue 0.0707153 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792236 ms - Host latency: 0.792236 ms (end to end 0.792236 ms, enqueue 0.0717041 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793005 ms - Host latency: 0.793005 ms (end to end 0.793005 ms, enqueue 0.0671387 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791357 ms - Host latency: 0.791357 ms (end to end 0.791357 ms, enqueue 0.0687622 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792444 ms - Host latency: 0.792444 ms (end to end 0.792444 ms, enqueue 0.0671265 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792969 ms - Host latency: 0.792969 ms (end to end 0.792969 ms, enqueue 0.0686523 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792065 ms - Host latency: 0.792065 ms (end to end 0.792065 ms, enqueue 0.0669434 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791724 ms - Host latency: 0.791724 ms (end to end 0.791724 ms, enqueue 0.0687744 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793066 ms - Host latency: 0.793066 ms (end to end 0.793066 ms, enqueue 0.0678955 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791919 ms - Host latency: 0.791919 ms (end to end 0.791919 ms, enqueue 0.0674805 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792139 ms - Host latency: 0.792139 ms (end to end 0.792139 ms, enqueue 0.0680908 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793604 ms - Host latency: 0.793604 ms (end to end 0.793604 ms, enqueue 0.0673096 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79248 ms - Host latency: 0.79248 ms (end to end 0.79248 ms, enqueue 0.0725586 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791919 ms - Host latency: 0.791919 ms (end to end 0.791919 ms, enqueue 0.0721436 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79209 ms - Host latency: 0.79209 ms (end to end 0.79209 ms, enqueue 0.0671631 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792603 ms - Host latency: 0.792603 ms (end to end 0.792603 ms, enqueue 0.0683594 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792578 ms - Host latency: 0.792578 ms (end to end 0.792578 ms, enqueue 0.0668701 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792529 ms - Host latency: 0.792529 ms (end to end 0.792529 ms, enqueue 0.068457 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791699 ms - Host latency: 0.791699 ms (end to end 0.791699 ms, enqueue 0.06875 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.790381 ms - Host latency: 0.790381 ms (end to end 0.790381 ms, enqueue 0.0681152 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793042 ms - Host latency: 0.793042 ms (end to end 0.793042 ms, enqueue 0.0690918 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791382 ms - Host latency: 0.791382 ms (end to end 0.791382 ms, enqueue 0.0696289 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792334 ms - Host latency: 0.792334 ms (end to end 0.792334 ms, enqueue 0.0679199 ms) [10/04/2021-21:26:42] 
[I] Average on 10 runs - GPU latency: 0.792725 ms - Host latency: 0.792725 ms (end to end 0.792725 ms, enqueue 0.069751 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792505 ms - Host latency: 0.792505 ms (end to end 0.792505 ms, enqueue 0.0696289 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791724 ms - Host latency: 0.791724 ms (end to end 0.791724 ms, enqueue 0.0685791 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792456 ms - Host latency: 0.792456 ms (end to end 0.792456 ms, enqueue 0.0708984 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79375 ms - Host latency: 0.79375 ms (end to end 0.79375 ms, enqueue 0.0682861 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792822 ms - Host latency: 0.792822 ms (end to end 0.792822 ms, enqueue 0.0667969 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792969 ms - Host latency: 0.792969 ms (end to end 0.792969 ms, enqueue 0.0680908 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792627 ms - Host latency: 0.792627 ms (end to end 0.792627 ms, enqueue 0.0677246 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792261 ms - Host latency: 0.792261 ms (end to end 0.792261 ms, enqueue 0.0693604 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791699 ms - Host latency: 0.791699 ms (end to end 0.791699 ms, enqueue 0.0668457 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792651 ms - Host latency: 0.792651 ms (end to end 0.792651 ms, enqueue 0.0681641 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792358 ms - Host latency: 0.792358 ms (end to end 0.792358 ms, enqueue 0.0682129 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79187 ms - Host latency: 0.79187 ms (end to end 0.79187 ms, enqueue 0.0841064 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791797 ms - Host latency: 0.791797 ms (end to end 0.791797 ms, enqueue 0.0678467 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792432 ms - Host latency: 0.792432 ms (end to end 0.792432 ms, enqueue 0.0679199 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792163 ms - Host latency: 0.792163 ms (end to end 0.792163 ms, enqueue 0.0672363 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791675 ms - Host latency: 0.791675 ms (end to end 0.791675 ms, enqueue 0.068457 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.794067 ms - Host latency: 0.794067 ms (end to end 0.794067 ms, enqueue 0.0681152 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79209 ms - Host latency: 0.79209 ms (end to end 0.79209 ms, enqueue 0.069165 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792407 ms - Host latency: 0.792407 ms (end to end 0.792407 ms, enqueue 0.0672363 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792334 ms - Host latency: 0.792334 ms (end to end 0.792334 ms, enqueue 0.0681396 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791943 ms - Host latency: 0.791943 ms (end to end 0.791943 ms, enqueue 0.0674316 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793286 ms - Host latency: 0.793286 ms (end to end 0.793286 ms, enqueue 0.0685303 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793335 ms - Host latency: 0.793335 ms (end to end 0.793335 ms, enqueue 0.0686523 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 
0.791968 ms - Host latency: 0.791968 ms (end to end 0.791968 ms, enqueue 0.0735596 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79314 ms - Host latency: 0.79314 ms (end to end 0.79314 ms, enqueue 0.0776367 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79314 ms - Host latency: 0.79314 ms (end to end 0.79314 ms, enqueue 0.0681396 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793335 ms - Host latency: 0.793335 ms (end to end 0.793335 ms, enqueue 0.0678467 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793115 ms - Host latency: 0.793115 ms (end to end 0.793115 ms, enqueue 0.0669434 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792725 ms - Host latency: 0.792725 ms (end to end 0.792725 ms, enqueue 0.0688965 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791089 ms - Host latency: 0.791089 ms (end to end 0.791089 ms, enqueue 0.0673828 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792114 ms - Host latency: 0.792114 ms (end to end 0.792114 ms, enqueue 0.0677002 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791724 ms - Host latency: 0.791724 ms (end to end 0.791724 ms, enqueue 0.0679443 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792627 ms - Host latency: 0.792627 ms (end to end 0.792627 ms, enqueue 0.0662598 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791187 ms - Host latency: 0.791187 ms (end to end 0.791187 ms, enqueue 0.06875 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79231 ms - Host latency: 0.79231 ms (end to end 0.79231 ms, enqueue 0.0668457 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79165 ms - Host latency: 0.79165 ms (end to end 0.79165 ms, enqueue 0.0709229 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792603 ms - Host latency: 0.792603 ms (end to end 0.792603 ms, enqueue 0.0696289 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792383 ms - Host latency: 0.792383 ms (end to end 0.792383 ms, enqueue 0.0668945 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792407 ms - Host latency: 0.792407 ms (end to end 0.792407 ms, enqueue 0.0731445 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793164 ms - Host latency: 0.793164 ms (end to end 0.793164 ms, enqueue 0.0676025 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792236 ms - Host latency: 0.792236 ms (end to end 0.792236 ms, enqueue 0.0684082 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791553 ms - Host latency: 0.791553 ms (end to end 0.791553 ms, enqueue 0.0662598 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.790747 ms - Host latency: 0.790747 ms (end to end 0.790747 ms, enqueue 0.0682373 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792212 ms - Host latency: 0.792212 ms (end to end 0.792212 ms, enqueue 0.0665527 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792163 ms - Host latency: 0.792163 ms (end to end 0.792163 ms, enqueue 0.0685059 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791528 ms - Host latency: 0.791528 ms (end to end 0.791528 ms, enqueue 0.0676514 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791748 ms - Host latency: 0.791748 ms (end to end 0.791748 ms, enqueue 0.0713623 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791455 ms - Host latency: 0.791455 ms 
(end to end 0.791455 ms, enqueue 0.0681641 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791309 ms - Host latency: 0.791309 ms (end to end 0.791309 ms, enqueue 0.0665039 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792456 ms - Host latency: 0.792456 ms (end to end 0.792456 ms, enqueue 0.070166 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792334 ms - Host latency: 0.792334 ms (end to end 0.792334 ms, enqueue 0.0682617 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791699 ms - Host latency: 0.791699 ms (end to end 0.791699 ms, enqueue 0.0685547 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793164 ms - Host latency: 0.793164 ms (end to end 0.793164 ms, enqueue 0.0668457 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792529 ms - Host latency: 0.792529 ms (end to end 0.792529 ms, enqueue 0.0692871 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792456 ms - Host latency: 0.792456 ms (end to end 0.792456 ms, enqueue 0.0671875 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791455 ms - Host latency: 0.791455 ms (end to end 0.791455 ms, enqueue 0.0670898 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792456 ms - Host latency: 0.792456 ms (end to end 0.792456 ms, enqueue 0.0683105 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791406 ms - Host latency: 0.791406 ms (end to end 0.791406 ms, enqueue 0.0700928 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792383 ms - Host latency: 0.792383 ms (end to end 0.792383 ms, enqueue 0.0754395 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793701 ms - Host latency: 0.793701 ms (end to end 0.793701 ms, enqueue 0.0667236 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791699 ms - Host latency: 0.791699 ms (end to end 0.791699 ms, enqueue 0.0699707 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792139 ms - Host latency: 0.792139 ms (end to end 0.792139 ms, enqueue 0.0679199 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792895 ms - Host latency: 0.792895 ms (end to end 0.792895 ms, enqueue 0.0670898 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791333 ms - Host latency: 0.791333 ms (end to end 0.791333 ms, enqueue 0.069043 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791919 ms - Host latency: 0.791919 ms (end to end 0.791919 ms, enqueue 0.0678955 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792114 ms - Host latency: 0.792114 ms (end to end 0.792114 ms, enqueue 0.0670654 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79126 ms - Host latency: 0.79126 ms (end to end 0.79126 ms, enqueue 0.0681152 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792725 ms - Host latency: 0.792725 ms (end to end 0.792725 ms, enqueue 0.0673584 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792114 ms - Host latency: 0.792114 ms (end to end 0.792114 ms, enqueue 0.0695557 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791943 ms - Host latency: 0.791943 ms (end to end 0.791943 ms, enqueue 0.0739014 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792505 ms - Host latency: 0.792505 ms (end to end 0.792505 ms, enqueue 0.067334 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791455 ms - Host latency: 0.791455 ms (end to end 0.791455 ms, enqueue 
0.0700195 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79231 ms - Host latency: 0.79231 ms (end to end 0.79231 ms, enqueue 0.0673096 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792627 ms - Host latency: 0.792627 ms (end to end 0.792627 ms, enqueue 0.0680664 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791943 ms - Host latency: 0.791943 ms (end to end 0.791943 ms, enqueue 0.0665772 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79248 ms - Host latency: 0.79248 ms (end to end 0.79248 ms, enqueue 0.0685791 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79248 ms - Host latency: 0.79248 ms (end to end 0.79248 ms, enqueue 0.0675293 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793677 ms - Host latency: 0.793677 ms (end to end 0.793677 ms, enqueue 0.0677979 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791992 ms - Host latency: 0.791992 ms (end to end 0.791992 ms, enqueue 0.066333 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791553 ms - Host latency: 0.791553 ms (end to end 0.791553 ms, enqueue 0.0678711 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791797 ms - Host latency: 0.791797 ms (end to end 0.791797 ms, enqueue 0.0708252 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792358 ms - Host latency: 0.792358 ms (end to end 0.792358 ms, enqueue 0.0706055 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791602 ms - Host latency: 0.791602 ms (end to end 0.791602 ms, enqueue 0.0678223 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792139 ms - Host latency: 0.792139 ms (end to end 0.792139 ms, enqueue 0.0679443 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792139 ms - Host latency: 0.792139 ms (end to end 0.792139 ms, enqueue 0.0678955 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791992 ms - Host latency: 0.791992 ms (end to end 0.791992 ms, enqueue 0.0661865 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792188 ms - Host latency: 0.792188 ms (end to end 0.792188 ms, enqueue 0.0697754 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792651 ms - Host latency: 0.792651 ms (end to end 0.792651 ms, enqueue 0.0676514 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791553 ms - Host latency: 0.791553 ms (end to end 0.791553 ms, enqueue 0.0681396 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793335 ms - Host latency: 0.793335 ms (end to end 0.793335 ms, enqueue 0.0715332 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791797 ms - Host latency: 0.791797 ms (end to end 0.791797 ms, enqueue 0.0673828 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792212 ms - Host latency: 0.792212 ms (end to end 0.792212 ms, enqueue 0.067749 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791162 ms - Host latency: 0.791162 ms (end to end 0.791162 ms, enqueue 0.0723389 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79248 ms - Host latency: 0.79248 ms (end to end 0.79248 ms, enqueue 0.0681152 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791284 ms - Host latency: 0.791284 ms (end to end 0.791284 ms, enqueue 0.0662598 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792065 ms - Host latency: 0.792065 ms (end to end 0.792065 ms, enqueue 0.0687256 ms) [10/04/2021-21:26:42] [I] 
Average on 10 runs - GPU latency: 0.792749 ms - Host latency: 0.792749 ms (end to end 0.792749 ms, enqueue 0.0674805 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791504 ms - Host latency: 0.791504 ms (end to end 0.791504 ms, enqueue 0.0676514 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792725 ms - Host latency: 0.792725 ms (end to end 0.792725 ms, enqueue 0.0674561 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792163 ms - Host latency: 0.792163 ms (end to end 0.792163 ms, enqueue 0.0658203 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791943 ms - Host latency: 0.791943 ms (end to end 0.791943 ms, enqueue 0.0683594 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791528 ms - Host latency: 0.791528 ms (end to end 0.791528 ms, enqueue 0.0818359 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791211 ms - Host latency: 0.791211 ms (end to end 0.791211 ms, enqueue 0.116479 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.790649 ms - Host latency: 0.790649 ms (end to end 0.790649 ms, enqueue 0.123438 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791357 ms - Host latency: 0.791357 ms (end to end 0.791357 ms, enqueue 0.068457 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793042 ms - Host latency: 0.793042 ms (end to end 0.793042 ms, enqueue 0.0658203 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792969 ms - Host latency: 0.792969 ms (end to end 0.792969 ms, enqueue 0.0659668 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791479 ms - Host latency: 0.791479 ms (end to end 0.791479 ms, enqueue 0.0675293 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79248 ms - Host latency: 0.79248 ms (end to end 0.79248 ms, enqueue 0.0685059 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79187 ms - Host latency: 0.79187 ms (end to end 0.79187 ms, enqueue 0.0672363 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.790845 ms - Host latency: 0.790845 ms (end to end 0.790845 ms, enqueue 0.0671875 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.79248 ms - Host latency: 0.79248 ms (end to end 0.79248 ms, enqueue 0.067334 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791821 ms - Host latency: 0.791821 ms (end to end 0.791821 ms, enqueue 0.0686035 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792114 ms - Host latency: 0.792114 ms (end to end 0.792114 ms, enqueue 0.066333 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791699 ms - Host latency: 0.791699 ms (end to end 0.791699 ms, enqueue 0.072998 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791602 ms - Host latency: 0.791602 ms (end to end 0.791602 ms, enqueue 0.0683105 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791626 ms - Host latency: 0.791626 ms (end to end 0.791626 ms, enqueue 0.0658936 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791895 ms - Host latency: 0.791895 ms (end to end 0.791895 ms, enqueue 0.0668945 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791919 ms - Host latency: 0.791919 ms (end to end 0.791919 ms, enqueue 0.103052 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791724 ms - Host latency: 0.791724 ms (end to end 0.791724 ms, enqueue 0.0880371 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.792651 
ms - Host latency: 0.792651 ms (end to end 0.792651 ms, enqueue 0.0657471 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.793188 ms - Host latency: 0.793188 ms (end to end 0.793188 ms, enqueue 0.0660645 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791968 ms - Host latency: 0.791968 ms (end to end 0.791968 ms, enqueue 0.0671631 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791895 ms - Host latency: 0.791895 ms (end to end 0.791895 ms, enqueue 0.0719238 ms) [10/04/2021-21:26:42] [I] Average on 10 runs - GPU latency: 0.791821 ms - Host latency: 0.791821 ms (end to end 0.791821 ms, enqueue 0.0664551 ms)
[10/04/2021-21:26:42] [I]
[10/04/2021-21:26:42] [I] === Performance summary ===
[10/04/2021-21:26:42] [I] Throughput: 1261.04 qps
[10/04/2021-21:26:42] [I] Latency: min = 0.786377 ms, max = 0.798889 ms, mean = 0.79218 ms, median = 0.79248 ms, percentile(99%) = 0.796661 ms
[10/04/2021-21:26:42] [I] End-to-End Host Latency: min = 0.786377 ms, max = 0.798889 ms, mean = 0.79218 ms, median = 0.79248 ms, percentile(99%) = 0.796661 ms
[10/04/2021-21:26:42] [I] Enqueue Time: min = 0.0463867 ms, max = 0.272339 ms, mean = 0.0696422 ms, median = 0.0671387 ms, percentile(99%) = 0.113037 ms
[10/04/2021-21:26:42] [I] H2D Latency: min = 0 ms, max = 0 ms, mean = 0 ms, median = 0 ms, percentile(99%) = 0 ms
[10/04/2021-21:26:42] [I] GPU Compute Time: min = 0.786377 ms, max = 0.798889 ms, mean = 0.79218 ms, median = 0.79248 ms, percentile(99%) = 0.796661 ms
[10/04/2021-21:26:42] [I] D2H Latency: min = 0 ms, max = 0 ms, mean = 0 ms, median = 0 ms, percentile(99%) = 0 ms
[10/04/2021-21:26:42] [I] Total Host Walltime: 3.00149 s
[10/04/2021-21:26:42] [I] Total GPU Compute Time: 2.9984 s
[10/04/2021-21:26:42] [I] Explanations of the performance metrics are printed in the verbose logs.
[10/04/2021-21:26:42] [V]
[10/04/2021-21:26:42] [V] === Explanations of the performance metrics ===
[10/04/2021-21:26:42] [V] Total Host Walltime: the host walltime from when the first query (after warmups) is enqueued to when the last query is completed.
[10/04/2021-21:26:42] [V] GPU Compute Time: the GPU latency to execute the kernels for a query.
[10/04/2021-21:26:42] [V] Total GPU Compute Time: the summation of the GPU Compute Time of all the queries. If this is significantly shorter than Total Host Walltime, the GPU may be under-utilized because of host-side overheads or data transfers.
[10/04/2021-21:26:42] [V] Throughput: the observed throughput computed by dividing the number of queries by the Total Host Walltime. If this is significantly lower than the reciprocal of GPU Compute Time, the GPU may be under-utilized because of host-side overheads or data transfers.
[10/04/2021-21:26:42] [V] Enqueue Time: the host latency to enqueue a query. If this is longer than GPU Compute Time, the GPU may be under-utilized.
[10/04/2021-21:26:42] [V] H2D Latency: the latency for host-to-device data transfers for input tensors of a single query.
[10/04/2021-21:26:42] [V] D2H Latency: the latency for device-to-host data transfers for output tensors of a single query.
[10/04/2021-21:26:42] [V] Latency: the summation of H2D Latency, GPU Compute Time, and D2H Latency. This is the latency to infer a single query.
[10/04/2021-21:26:42] [V] End-to-End Host Latency: the duration from when the H2D of a query is called to when the D2H of the same query is completed, which includes the latency to wait for the completion of the previous query. This is the latency of a query if multiple queries are enqueued consecutively.
[10/04/2021-21:26:42] [I] &&&& PASSED TensorRT.trtexec [TensorRT v8001] # /usr/src/tensorrt/bin/trtexec --onnx=resnet.onnx --useCudaGraph --threads --noDataTransfers --verbose --int8
[10/04/2021-21:26:42] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1354, GPU 3626 (MiB)
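As a quick cross-check of the relationships described in the verbose explanations above: with --noDataTransfers the H2D/D2H latencies are zero, so Latency equals GPU Compute Time, and Throughput should approach the reciprocal of the mean GPU Compute Time. The following minimal Python sketch ties the summary numbers together; all values are copied from the summary above, the variable names are illustrative, and nothing below is produced by trtexec itself.

    # Sanity-check the trtexec performance summary by hand.
    # All inputs are copied from the "=== Performance summary ===" block above.
    mean_gpu_compute_ms = 0.79218      # GPU Compute Time (mean)
    total_host_walltime_s = 3.00149    # Total Host Walltime
    total_gpu_compute_s = 2.9984       # Total GPU Compute Time
    reported_throughput_qps = 1261.04  # Throughput reported by trtexec

    # Approximate number of queries executed during the measurement window.
    num_queries = round(total_gpu_compute_s / (mean_gpu_compute_ms / 1000.0))

    # Throughput = number of queries / Total Host Walltime (per the explanation above).
    derived_throughput = num_queries / total_host_walltime_s

    # Upper bound if the GPU were never idle: reciprocal of mean GPU Compute Time.
    gpu_bound_throughput = 1000.0 / mean_gpu_compute_ms

    print(f"queries            ~ {num_queries}")                    # ~3785
    print(f"derived throughput ~ {derived_throughput:.1f} qps")     # ~1261 qps, matches the log
    print(f"GPU-bound ceiling  ~ {gpu_bound_throughput:.1f} qps")   # ~1262 qps

Since the derived ~1261 qps sits within about 0.1% of the ~1262 qps GPU-bound ceiling, the run is compute-bound rather than limited by host-side overhead, which is what one would expect with CUDA graphs enabled and data transfers disabled.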