&&&& RUNNING TensorRT.trtexec [TensorRT v8003] # C:\Program Files\NVIDIA GPU Computing Toolkit\TensorRT-8.0.3.4\bin\trtexec.exe --onnx=resnet.onnx --useCudaGraph --threads --noDataTransfers --verbose --best
[10/04/2021-21:34:35] [I] === Model Options ===
[10/04/2021-21:34:35] [I] Format: ONNX
[10/04/2021-21:34:35] [I] Model: resnet.onnx
[10/04/2021-21:34:35] [I] Output:
[10/04/2021-21:34:35] [I] === Build Options ===
[10/04/2021-21:34:35] [I] Max batch: explicit
[10/04/2021-21:34:35] [I] Workspace: 16 MiB
[10/04/2021-21:34:35] [I] minTiming: 1
[10/04/2021-21:34:35] [I] avgTiming: 8
[10/04/2021-21:34:35] [I] Precision: FP32+FP16+INT8
[10/04/2021-21:34:35] [I] Calibration: Dynamic
[10/04/2021-21:34:35] [I] Refit: Disabled
[10/04/2021-21:34:35] [I] Sparsity: Disabled
[10/04/2021-21:34:35] [I] Safe mode: Disabled
[10/04/2021-21:34:35] [I] Restricted mode: Disabled
[10/04/2021-21:34:35] [I] Save engine:
[10/04/2021-21:34:35] [I] Load engine:
[10/04/2021-21:34:35] [I] NVTX verbosity: 0
[10/04/2021-21:34:35] [I] Tactic sources: Using default tactic sources
[10/04/2021-21:34:35] [I] timingCacheMode: local
[10/04/2021-21:34:35] [I] timingCacheFile:
[10/04/2021-21:34:35] [I] Input(s)s format: fp32:CHW
[10/04/2021-21:34:35] [I] Output(s)s format: fp32:CHW
[10/04/2021-21:34:35] [I] Input build shapes: model
[10/04/2021-21:34:35] [I] Input calibration shapes: model
[10/04/2021-21:34:35] [I] === System Options ===
[10/04/2021-21:34:35] [I] Device: 0
[10/04/2021-21:34:35] [I] DLACore:
[10/04/2021-21:34:35] [I] Plugins:
[10/04/2021-21:34:35] [I] === Inference Options ===
[10/04/2021-21:34:35] [I] Batch: Explicit
[10/04/2021-21:34:35] [I] Input inference shapes: model
[10/04/2021-21:34:35] [I] Iterations: 10
[10/04/2021-21:34:35] [I] Duration: 3s (+ 200ms warm up)
[10/04/2021-21:34:35] [I] Sleep time: 0ms
[10/04/2021-21:34:35] [I] Streams: 1
[10/04/2021-21:34:35] [I] ExposeDMA: Disabled
[10/04/2021-21:34:35] [I] Data transfers: Disabled
[10/04/2021-21:34:35] [I] Spin-wait: Disabled
[10/04/2021-21:34:35] [I] Multithreading: Enabled
[10/04/2021-21:34:35] [I] CUDA Graph: Enabled
[10/04/2021-21:34:35] [I] Separate profiling: Disabled
[10/04/2021-21:34:35] [I] Time Deserialize: Disabled
[10/04/2021-21:34:35] [I] Time Refit: Disabled
[10/04/2021-21:34:35] [I] Skip inference: Disabled
[10/04/2021-21:34:35] [I] Inputs:
[10/04/2021-21:34:35] [I] === Reporting Options ===
[10/04/2021-21:34:35] [I] Verbose: Enabled
[10/04/2021-21:34:35] [I] Averages: 10 inferences
[10/04/2021-21:34:35] [I] Percentile: 99
[10/04/2021-21:34:35] [I] Dump refittable layers:Disabled
[10/04/2021-21:34:35] [I] Dump output: Disabled
[10/04/2021-21:34:35] [I] Profile: Disabled
[10/04/2021-21:34:35] [I] Export timing to JSON file:
[10/04/2021-21:34:35] [I] Export output to JSON file:
[10/04/2021-21:34:35] [I] Export profile to JSON file:
[10/04/2021-21:34:35] [I]
[10/04/2021-21:34:35] [I] === Device Information ===
[10/04/2021-21:34:35] [I] Selected Device: GeForce GTX 1060 6GB
[10/04/2021-21:34:35] [I] Compute Capability: 6.1
[10/04/2021-21:34:35] [I] SMs: 10
[10/04/2021-21:34:35] [I] Compute Clock Rate: 1.7085 GHz
[10/04/2021-21:34:35] [I] Device Global Memory: 6144 MiB
[10/04/2021-21:34:35] [I] Shared Memory per SM: 96 KiB
[10/04/2021-21:34:35] [I] Memory Bus Width: 192 bits (ECC disabled)
[10/04/2021-21:34:35] [I] Memory Clock Rate: 4.004 GHz
[10/04/2021-21:34:35] [I]
[10/04/2021-21:34:35] [I] TensorRT version: 8003
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::Clip_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::ScatterND version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::Proposal version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::Split version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[10/04/2021-21:34:35] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[10/04/2021-21:34:36] [I] [TRT] [MemUsageChange] Init CUDA: CPU +257, GPU +0, now: CPU 9203, GPU 977 (MiB)
[10/04/2021-21:34:36] [I] Start parsing network model
[10/04/2021-21:34:36] [I] [TRT] ----------------------------------------------------------------
[10/04/2021-21:34:36] [I] [TRT] Input filename: resnet.onnx
[10/04/2021-21:34:36] [I] [TRT] ONNX IR version: 0.0.7
[10/04/2021-21:34:36] [I] [TRT] Opset version: 13
[10/04/2021-21:34:36] [I] [TRT] Producer name: tf2onnx
[10/04/2021-21:34:36] [I] [TRT] Producer version: 1.10.0
[10/04/2021-21:34:36] [I] [TRT] Domain:
[10/04/2021-21:34:36] [I] [TRT] Model version: 0
[10/04/2021-21:34:36] [I] [TRT] Doc string:
[10/04/2021-21:34:36] [I] [TRT] ----------------------------------------------------------------
[10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::GridAnchor_TRT version 1
[10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::GridAnchorRect_TRT version 1
[10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::NMS_TRT version 1
[10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::Reorg_TRT version 1
[10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::Region_TRT version 1
[10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::Clip_TRT version 1
[10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::LReLU_TRT version 1
[10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::PriorBox_TRT version 1
[10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::Normalize_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::ScatterND version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::RPROI_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::BatchedNMS_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::BatchedNMSDynamic_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::FlattenConcat_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::CropAndResize version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::DetectionLayer_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::EfficientNMS_ONNX_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::EfficientNMS_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::Proposal version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::ProposalLayer_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::PyramidROIAlign_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::ResizeNearest_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::Split version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::SpecialSlice_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Plugin creator already registered - ::InstanceNormalization_TRT version 1 [10/04/2021-21:34:36] [V] [TRT] Adding network input: input_1 with dtype: float32, dimensions: (-1, 3, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Registering tensor: input_1 for ONNX tensor: input_1 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__998 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__994 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__990 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__986 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__978 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__974 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__970 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__966 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__958 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__954 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__950 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__942 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__938 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__934 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__930 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__926 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__918 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__914 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__910 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: 
quant_scale__902 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__898 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__894 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__890 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__886 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__878 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__874 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__870 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__866 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__858 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__854 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__850 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__846 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__838 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__834 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__830 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__826 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__822 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__818 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__810 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__806 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__802 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__794 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__790 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__786 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__782 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__774 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__770 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__766 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__762 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__758 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__750 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__746 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__742 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__738 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__730 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__726 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__722 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__714 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__710 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__706 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__702 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__694 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__690 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__686 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__682 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__674 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__670 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__666 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__662 
[10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__654 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__650 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__646 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__642 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__634 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__630 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__626 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1186 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1182 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1178 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1174 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1166 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1162 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1158 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1154 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1146 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1142 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1138 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1130 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1126 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1122 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1118 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1114 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1106 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1102 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1098 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1094 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1086 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1082 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1078 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1074 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1066 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1062 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1058 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1054 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1046 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1042 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1038 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1034 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1026 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1022 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1018 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1010 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1006 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: quant_scale__1002 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: const_axes__2019 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd/ReadVariableOp__111 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd/ReadVariableOp__99 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd/ReadVariableOp__89 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd/ReadVariableOp__77 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd/ReadVariableOp__617 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd/ReadVariableOp__607 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd/ReadVariableOp__595 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd/ReadVariableOp__585 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd/ReadVariableOp__573 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd/ReadVariableOp__563 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd/ReadVariableOp__551 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd/ReadVariableOp__67 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd/ReadVariableOp__541 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd/ReadVariableOp__529 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd/ReadVariableOp__519 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd/ReadVariableOp__507 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd/ReadVariableOp__497 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd/ReadVariableOp__485 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd/ReadVariableOp__475 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd/ReadVariableOp__463 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd/ReadVariableOp__453 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd/ReadVariableOp__423 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd/ReadVariableOp__55 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd/ReadVariableOp__441 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd/ReadVariableOp__431 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd/ReadVariableOp__413 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd/ReadVariableOp__403 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd/ReadVariableOp__391 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd/ReadVariableOp__381 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd/ReadVariableOp__369 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd/ReadVariableOp__359 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd/ReadVariableOp__347 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd/ReadVariableOp__337 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd/ReadVariableOp__45 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd/ReadVariableOp__325 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd/ReadVariableOp__315 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd/ReadVariableOp__303 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd/ReadVariableOp__293 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd/ReadVariableOp__281 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd/ReadVariableOp__271 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd/ReadVariableOp__259 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd/ReadVariableOp__249 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd/ReadVariableOp__219 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd/ReadVariableOp__237 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd/ReadVariableOp__33 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd/ReadVariableOp__227 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd/ReadVariableOp__209 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd/ReadVariableOp__199 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd/ReadVariableOp__187 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd/ReadVariableOp__177 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd/ReadVariableOp__165 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd/ReadVariableOp__155 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd/ReadVariableOp__143 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd/ReadVariableOp__133 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd/ReadVariableOp__121 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd/ReadVariableOp__23 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d/transpose__8 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/quant_conv2d/BiasAdd/ReadVariableOp__11 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/dense/MatMul/ReadVariableOp__622 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp__112 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp_1__113 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp__114 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp_1__115 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp__100 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp_1__101 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp__102 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp_1__103 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp__90 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp_1__91 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp__92 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp_1__93 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp__78 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp_1__79 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp__80 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp_1__81 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp__618 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp_1__619 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp__620 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp_1__621 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp__608 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp_1__609 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp__610 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp_1__611 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp__596 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp_1__597 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp__598 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp_1__599 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp__586 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp_1__587 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp__588 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp_1__589 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp__574 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp_1__575 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp__576 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp_1__577 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp__68 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp_1__69 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp__70 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp_1__71 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp__564 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp_1__565 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp__566 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp_1__567 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp__552 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp_1__553 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp__554 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp_1__555 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp__542 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp_1__543 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp__544 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp_1__545 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp__530 [10/04/2021-21:34:36] [V] [TRT] Importing 
initializer: StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp_1__531 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp__532 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp_1__533 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp__520 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp_1__521 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp__522 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp_1__523 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp__508 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp_1__509 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp__510 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp_1__511 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp__498 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp_1__499 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp__500 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp_1__501 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp__486 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp_1__487 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp__488 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp_1__489 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp__476 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp_1__477 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp__478 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp_1__479 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp__464 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp_1__465 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp__466 [10/04/2021-21:34:36] 
[V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp_1__467 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp__56 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp_1__57 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp__58 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp_1__59 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp__454 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp_1__455 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp__456 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp_1__457 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp__442 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp_1__443 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp__444 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp_1__445 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp__432 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp_1__433 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp__434 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp_1__435 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp__414 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp_1__415 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp__416 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp_1__417 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp__404 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp_1__405 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp__406 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp_1__407 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp__392 
[10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp_1__393 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp__394 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp_1__395 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp__382 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp_1__383 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp__384 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp_1__385 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp__370 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp_1__371 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp__372 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp_1__373 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp__360 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp_1__361 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp__362 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp_1__363 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp__348 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp_1__349 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp__350 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp_1__351 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp__46 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp_1__47 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp__48 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp_1__49 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp__338 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp_1__339 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp__340 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp_1__341 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp__326 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp_1__327 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp__328 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp_1__329 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp__316 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp_1__317 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp__318 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp_1__319 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp__304 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp_1__305 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp__306 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp_1__307 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp__294 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp_1__295 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp__296 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp_1__297 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp__282 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp_1__283 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp__284 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp_1__285 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp__272 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp_1__273 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp__274 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp_1__275 
[10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp__260 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp_1__261 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp__262 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp_1__263 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp__250 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp_1__251 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp__252 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp_1__253 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp__238 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp_1__239 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp__240 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp_1__241 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp__34 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp_1__35 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp__36 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp_1__37 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp__228 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp_1__229 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp__230 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp_1__231 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp__210 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp_1__211 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp__212 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp_1__213 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp__200 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp_1__201 
[10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp__202 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp_1__203 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp__188 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp_1__189 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp__190 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp_1__191 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp__178 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp_1__179 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp__180 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp_1__181 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp__166 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp_1__167 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp__168 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp_1__169 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp__156 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp_1__157 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp__158 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp_1__159 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp__144 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp_1__145 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp__146 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp_1__147 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp__134 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp_1__135 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp__136 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp_1__137 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp__122 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp_1__123 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp__124 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp_1__125 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp__24 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp_1__25 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp__26 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp_1__27 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization/ReadVariableOp__12 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization/ReadVariableOp_1__13 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp__14 [10/04/2021-21:34:36] [V] [TRT] Importing initializer: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp_1__15 [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__992 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__990 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__992 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__990 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__992:0 for ONNX tensor: QuantLinearNode__992:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__992 [QuantizeLinear] outputs: [QuantLinearNode__992:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__980 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__978 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__980 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__978 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: 
QuantLinearNode__980:0 for ONNX tensor: QuantLinearNode__980:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__980 [QuantizeLinear] outputs: [QuantLinearNode__980:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__972 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__970 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__972 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__970 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__972:0 for ONNX tensor: QuantLinearNode__972:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__972 [QuantizeLinear] outputs: [QuantLinearNode__972:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__960 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__958 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__960 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__958 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__960:0 for ONNX tensor: QuantLinearNode__960:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__960 [QuantizeLinear] outputs: [QuantLinearNode__960:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__952 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__950 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__952 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__950 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__952:0 for ONNX tensor: QuantLinearNode__952:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__952 [QuantizeLinear] outputs: [QuantLinearNode__952:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__940 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__938 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 
[10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__940 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__938 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__940:0 for ONNX tensor: QuantLinearNode__940:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__940 [QuantizeLinear] outputs: [QuantLinearNode__940:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__932 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__930 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__932 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__930 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__932:0 for ONNX tensor: QuantLinearNode__932:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__932 [QuantizeLinear] outputs: [QuantLinearNode__932:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__920 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__918 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__920 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__918 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__920:0 for ONNX tensor: QuantLinearNode__920:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__920 [QuantizeLinear] outputs: [QuantLinearNode__920:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__912 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__910 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__912 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__910 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__912:0 for ONNX tensor: QuantLinearNode__912:0 
[10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__912 [QuantizeLinear] outputs: [QuantLinearNode__912:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__900 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__898 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__900 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__898 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__900:0 for ONNX tensor: QuantLinearNode__900:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__900 [QuantizeLinear] outputs: [QuantLinearNode__900:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__892 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__890 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__892 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__890 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__892:0 for ONNX tensor: QuantLinearNode__892:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__892 [QuantizeLinear] outputs: [QuantLinearNode__892:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__880 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__878 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__880 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__878 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__880:0 for ONNX tensor: QuantLinearNode__880:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__880 [QuantizeLinear] outputs: [QuantLinearNode__880:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__872 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__870 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__872 [QuantizeLinear] 
inputs: [StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__870 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__872:0 for ONNX tensor: QuantLinearNode__872:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__872 [QuantizeLinear] outputs: [QuantLinearNode__872:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__860 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__858 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__860 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__858 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__860:0 for ONNX tensor: QuantLinearNode__860:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__860 [QuantizeLinear] outputs: [QuantLinearNode__860:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__852 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__850 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__852 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__850 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__852:0 for ONNX tensor: QuantLinearNode__852:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__852 [QuantizeLinear] outputs: [QuantLinearNode__852:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__840 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__838 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__840 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__838 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__840:0 for ONNX tensor: QuantLinearNode__840:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__840 [QuantizeLinear] 
outputs: [QuantLinearNode__840:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__832 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__830 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__832 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 -> (32, 16, 3, 3)[FLOAT]], [quant_scale__830 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__832:0 for ONNX tensor: QuantLinearNode__832:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__832 [QuantizeLinear] outputs: [QuantLinearNode__832:0 -> (32, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__820 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__818 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__820 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 -> (32, 16, 1, 1)[FLOAT]], [quant_scale__818 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__820:0 for ONNX tensor: QuantLinearNode__820:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__820 [QuantizeLinear] outputs: [QuantLinearNode__820:0 -> (32, 16, 1, 1)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__812 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__810 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__812 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__810 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__812:0 for ONNX tensor: QuantLinearNode__812:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__812 [QuantizeLinear] outputs: [QuantLinearNode__812:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__804 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__802 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__804 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 -> 
(16, 16, 3, 3)[FLOAT]], [quant_scale__802 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__804:0 for ONNX tensor: QuantLinearNode__804:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__804 [QuantizeLinear] outputs: [QuantLinearNode__804:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__792 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__790 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__792 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__790 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__792:0 for ONNX tensor: QuantLinearNode__792:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__792 [QuantizeLinear] outputs: [QuantLinearNode__792:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__784 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__782 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__784 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__782 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__784:0 for ONNX tensor: QuantLinearNode__784:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__784 [QuantizeLinear] outputs: [QuantLinearNode__784:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__772 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__770 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__772 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__770 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__772:0 for ONNX tensor: QuantLinearNode__772:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__772 [QuantizeLinear] outputs: [QuantLinearNode__772:0 -> (16, 16, 3, 3)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__764 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__762 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__764 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__762 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__764:0 for ONNX tensor: QuantLinearNode__764:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__764 [QuantizeLinear] outputs: [QuantLinearNode__764:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__752 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__750 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__752 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__750 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__752:0 for ONNX tensor: QuantLinearNode__752:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__752 [QuantizeLinear] outputs: [QuantLinearNode__752:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__744 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__742 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__744 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__742 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__744:0 for ONNX tensor: QuantLinearNode__744:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__744 [QuantizeLinear] outputs: [QuantLinearNode__744:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__732 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__730 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__732 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__730 -> (16)[FLOAT]], 
[zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__732:0 for ONNX tensor: QuantLinearNode__732:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__732 [QuantizeLinear] outputs: [QuantLinearNode__732:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__724 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__722 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__724 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__722 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__724:0 for ONNX tensor: QuantLinearNode__724:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__724 [QuantizeLinear] outputs: [QuantLinearNode__724:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__712 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__710 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__712 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__710 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__712:0 for ONNX tensor: QuantLinearNode__712:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__712 [QuantizeLinear] outputs: [QuantLinearNode__712:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__704 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__702 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__704 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__702 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__704:0 for ONNX tensor: QuantLinearNode__704:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__704 [QuantizeLinear] outputs: [QuantLinearNode__704:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__692 [QuantizeLinear] 
[10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__690 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__692 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__690 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__692:0 for ONNX tensor: QuantLinearNode__692:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__692 [QuantizeLinear] outputs: [QuantLinearNode__692:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__684 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__682 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__684 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__682 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__684:0 for ONNX tensor: QuantLinearNode__684:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__684 [QuantizeLinear] outputs: [QuantLinearNode__684:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__672 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__670 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__672 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__670 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__672:0 for ONNX tensor: QuantLinearNode__672:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__672 [QuantizeLinear] outputs: [QuantLinearNode__672:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__664 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__662 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__664 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__662 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__664:0 for ONNX tensor: QuantLinearNode__664:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__664 [QuantizeLinear] outputs: [QuantLinearNode__664:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__652 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__650 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__652 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__650 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__652:0 for ONNX tensor: QuantLinearNode__652:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__652 [QuantizeLinear] outputs: [QuantLinearNode__652:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__644 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__642 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__644 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__642 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__644:0 for ONNX tensor: QuantLinearNode__644:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__644 [QuantizeLinear] outputs: [QuantLinearNode__644:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__632 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d/transpose__8 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__630 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__632 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d/transpose__8 -> (16, 3, 3, 3)[FLOAT]], [quant_scale__630 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d/transpose__8 for ONNX node: StatefulPartitionedCall/model/quant_conv2d/transpose__8 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__632:0 for ONNX tensor: QuantLinearNode__632:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__632 [QuantizeLinear] outputs: [QuantLinearNode__632:0 -> (16, 3, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__628 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: input_1 [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
quant_scale__626 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__628 [QuantizeLinear] inputs: [input_1 -> (-1, 3, 32, 32)[FLOAT]], [quant_scale__626 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__628:0 for ONNX tensor: QuantLinearNode__628:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__628 [QuantizeLinear] outputs: [QuantLinearNode__628:0 -> (-1, 3, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1188 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1186 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1188 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1186 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1188:0 for ONNX tensor: QuantLinearNode__1188:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1188 [QuantizeLinear] outputs: [QuantLinearNode__1188:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1180 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1178 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1180 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1178 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1180:0 for ONNX tensor: QuantLinearNode__1180:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1180 [QuantizeLinear] outputs: [QuantLinearNode__1180:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1168 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1166 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1168 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1166 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1168:0 for ONNX tensor: QuantLinearNode__1168:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1168 [QuantizeLinear] outputs: [QuantLinearNode__1168:0 -> (64, 64, 3, 
3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1160 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1158 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1160 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1158 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1160:0 for ONNX tensor: QuantLinearNode__1160:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1160 [QuantizeLinear] outputs: [QuantLinearNode__1160:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1148 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1146 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1148 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1146 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1148:0 for ONNX tensor: QuantLinearNode__1148:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1148 [QuantizeLinear] outputs: [QuantLinearNode__1148:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1140 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1138 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1140 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1138 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1140:0 for ONNX tensor: QuantLinearNode__1140:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1140 [QuantizeLinear] outputs: [QuantLinearNode__1140:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1128 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1126 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1128 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 -> (64, 64, 3, 
3)[FLOAT]], [quant_scale__1126 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1128:0 for ONNX tensor: QuantLinearNode__1128:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1128 [QuantizeLinear] outputs: [QuantLinearNode__1128:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1120 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1118 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1120 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1118 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1120:0 for ONNX tensor: QuantLinearNode__1120:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1120 [QuantizeLinear] outputs: [QuantLinearNode__1120:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1108 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1106 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1108 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1106 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1108:0 for ONNX tensor: QuantLinearNode__1108:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1108 [QuantizeLinear] outputs: [QuantLinearNode__1108:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1100 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1098 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1100 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1098 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1100:0 for ONNX tensor: QuantLinearNode__1100:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1100 [QuantizeLinear] outputs: [QuantLinearNode__1100:0 -> (64, 64, 3, 
3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1088 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1086 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1088 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1086 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1088:0 for ONNX tensor: QuantLinearNode__1088:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1088 [QuantizeLinear] outputs: [QuantLinearNode__1088:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1080 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1078 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1080 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1078 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1080:0 for ONNX tensor: QuantLinearNode__1080:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1080 [QuantizeLinear] outputs: [QuantLinearNode__1080:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1068 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1066 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1068 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1066 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1068:0 for ONNX tensor: QuantLinearNode__1068:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1068 [QuantizeLinear] outputs: [QuantLinearNode__1068:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1060 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1058 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1060 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 -> (64, 64, 3, 
3)[FLOAT]], [quant_scale__1058 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1060:0 for ONNX tensor: QuantLinearNode__1060:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1060 [QuantizeLinear] outputs: [QuantLinearNode__1060:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1048 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1046 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1048 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1046 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1048:0 for ONNX tensor: QuantLinearNode__1048:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1048 [QuantizeLinear] outputs: [QuantLinearNode__1048:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1040 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1038 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1040 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1038 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1040:0 for ONNX tensor: QuantLinearNode__1040:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1040 [QuantizeLinear] outputs: [QuantLinearNode__1040:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1028 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1026 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1028 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1026 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1028:0 for ONNX tensor: QuantLinearNode__1028:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1028 [QuantizeLinear] outputs: [QuantLinearNode__1028:0 -> (64, 64, 3, 
3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1020 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1018 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1020 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 -> (64, 32, 3, 3)[FLOAT]], [quant_scale__1018 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1020:0 for ONNX tensor: QuantLinearNode__1020:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1020 [QuantizeLinear] outputs: [QuantLinearNode__1020:0 -> (64, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1008 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1006 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1008 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 -> (64, 32, 1, 1)[FLOAT]], [quant_scale__1006 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1008:0 for ONNX tensor: QuantLinearNode__1008:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1008 [QuantizeLinear] outputs: [QuantLinearNode__1008:0 -> (64, 32, 1, 1)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__1000 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__998 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1000 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__998 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 for ONNX node: StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__1000:0 for ONNX tensor: QuantLinearNode__1000:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__1000 [QuantizeLinear] outputs: [QuantLinearNode__1000:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__993 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__992:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__990 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__993 [DequantizeLinear] inputs: [QuantLinearNode__992:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__990 -> (32)[FLOAT]], [zero_point__919 -> 
(32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__993 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__981 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__980:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__978 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__981 [DequantizeLinear] inputs: [QuantLinearNode__980:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__978 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__981 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__973 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__972:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__970 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__973 [DequantizeLinear] inputs: [QuantLinearNode__972:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__970 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__973 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__961 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__960:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__958 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__961 [DequantizeLinear] inputs: [QuantLinearNode__960:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__958 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__961 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__953 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__952:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__950 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__953 [DequantizeLinear] inputs: [QuantLinearNode__952:0 -> (32, 32, 
3, 3)[FLOAT]], [quant_scale__950 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__953 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__941 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__940:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__938 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__941 [DequantizeLinear] inputs: [QuantLinearNode__940:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__938 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__941 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__933 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__932:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__930 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__933 [DequantizeLinear] inputs: [QuantLinearNode__932:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__930 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__933 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__921 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__920:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__918 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__921 [DequantizeLinear] inputs: [QuantLinearNode__920:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__918 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__921 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__913 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__912:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__910 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] 
DequantLinearNode__913 [DequantizeLinear] inputs: [QuantLinearNode__912:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__910 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__913 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__901 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__900:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__898 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__901 [DequantizeLinear] inputs: [QuantLinearNode__900:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__898 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__901 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__893 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__892:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__890 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__893 [DequantizeLinear] inputs: [QuantLinearNode__892:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__890 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__893 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__881 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__880:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__878 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__881 [DequantizeLinear] inputs: [QuantLinearNode__880:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__878 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__881 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__873 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__872:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__870 [10/04/2021-21:34:36] [V] [TRT] 
Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__873 [DequantizeLinear] inputs: [QuantLinearNode__872:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__870 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__873 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__861 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__860:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__858 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__861 [DequantizeLinear] inputs: [QuantLinearNode__860:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__858 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__861 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__853 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__852:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__850 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__853 [DequantizeLinear] inputs: [QuantLinearNode__852:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__850 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__853 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__841 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__840:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__838 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__841 [DequantizeLinear] inputs: [QuantLinearNode__840:0 -> (32, 32, 3, 3)[FLOAT]], [quant_scale__838 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__841 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__833 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__832:0 [10/04/2021-21:34:36] [V] [TRT] 
Searching for input: quant_scale__830 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__833 [DequantizeLinear] inputs: [QuantLinearNode__832:0 -> (32, 16, 3, 3)[FLOAT]], [quant_scale__830 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__833 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize_1:0 -> (32, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__821 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__820:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__818 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__821 [DequantizeLinear] inputs: [QuantLinearNode__820:0 -> (32, 16, 1, 1)[FLOAT]], [quant_scale__818 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__821 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize_1:0 -> (32, 16, 1, 1)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__813 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__812:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__810 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__813 [DequantizeLinear] inputs: [QuantLinearNode__812:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__810 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__813 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__805 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__804:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__802 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__805 [DequantizeLinear] inputs: [QuantLinearNode__804:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__802 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__805 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__793 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] 
Searching for input: QuantLinearNode__792:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__790 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__793 [DequantizeLinear] inputs: [QuantLinearNode__792:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__790 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__793 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__785 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__784:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__782 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__785 [DequantizeLinear] inputs: [QuantLinearNode__784:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__782 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__785 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__773 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__772:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__770 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__773 [DequantizeLinear] inputs: [QuantLinearNode__772:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__770 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__773 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__765 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__764:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__762 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__765 [DequantizeLinear] inputs: [QuantLinearNode__764:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__762 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__765 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: 
DequantLinearNode__753 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__752:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__750 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__753 [DequantizeLinear] inputs: [QuantLinearNode__752:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__750 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__753 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__745 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__744:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__742 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__745 [DequantizeLinear] inputs: [QuantLinearNode__744:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__742 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__745 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__733 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__732:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__730 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__733 [DequantizeLinear] inputs: [QuantLinearNode__732:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__730 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__733 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__725 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__724:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__722 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__725 [DequantizeLinear] inputs: [QuantLinearNode__724:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__722 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__725 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize_1:0 -> 
(16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__713 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__712:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__710 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__713 [DequantizeLinear] inputs: [QuantLinearNode__712:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__710 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__713 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__705 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__704:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__702 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__705 [DequantizeLinear] inputs: [QuantLinearNode__704:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__702 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__705 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__693 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__692:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__690 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__693 [DequantizeLinear] inputs: [QuantLinearNode__692:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__690 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__693 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__685 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__684:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__682 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__685 [DequantizeLinear] inputs: [QuantLinearNode__684:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__682 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__685 [DequantizeLinear] outputs: 
[StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__673 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__672:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__670 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__673 [DequantizeLinear] inputs: [QuantLinearNode__672:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__670 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__673 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__665 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__664:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__662 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__665 [DequantizeLinear] inputs: [QuantLinearNode__664:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__662 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__665 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__653 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__652:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__650 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__653 [DequantizeLinear] inputs: [QuantLinearNode__652:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__650 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__653 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__645 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__644:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__642 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__645 [DequantizeLinear] inputs: [QuantLinearNode__644:0 -> (16, 16, 3, 3)[FLOAT]], [quant_scale__642 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] 
[TRT] DequantLinearNode__645 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__633 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__632:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__630 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__803 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__633 [DequantizeLinear] inputs: [QuantLinearNode__632:0 -> (16, 3, 3, 3)[FLOAT]], [quant_scale__630 -> (16)[FLOAT]], [zero_point__803 -> (16)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__633 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize_1:0 -> (16, 3, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__629 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__628:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__626 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__629 [DequantizeLinear] inputs: [QuantLinearNode__628:0 -> (-1, 3, 32, 32)[FLOAT]], [quant_scale__626 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__629 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize:0 -> (-1, 3, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d/BiasAdd/ReadVariableOp__11 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize:0 -> (-1, 3, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d/quantize_and_dequantize_1:0 -> (16, 3, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d/BiasAdd/ReadVariableOp__11 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 3, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
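Note on the "Kernel weights are not set yet" entries above: for this QAT model the ONNX parser wires each convolution's kernel in as a tensor (the output of a DequantizeLinear node) rather than as constant weights, so the layer is created with empty weights and the kernel is attached through setInput(1, kernel_tensor). The following is only a rough sketch of that same pattern in the TensorRT 8.x Python API; the tensor names, shapes, scale value and bias values are illustrative and are not taken from resnet.onnx.

import numpy as np
import tensorrt as trt

# Illustrative sketch: one Q/DQ-quantized convolution built by hand,
# mirroring what the ONNX parser reports in the log above.
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

# Activation input with the same layout as the first quant_conv2d layer.
data = network.add_input("data", trt.float32, (-1, 3, 32, 32))

# FP32 weights plus an assumed per-tensor scale, pushed through explicit
# quantize/dequantize layers (the QuantizeLinear/DequantizeLinear pattern).
w = network.add_constant((16, 3, 3, 3), np.ones((16, 3, 3, 3), dtype=np.float32))
s = network.add_constant((1,), np.array([0.02], dtype=np.float32))
q = network.add_quantize(w.get_output(0), s.get_output(0))
dq = network.add_dequantize(q.get_output(0), s.get_output(0))

# The convolution is created with empty kernel weights; the dequantized
# kernel tensor is then attached as input 1, which is exactly what the
# "setInput(1, kernel_tensor)" message refers to.
conv = network.add_convolution_nd(data, 16, (3, 3), trt.Weights(),
                                  np.zeros(16, dtype=np.float32))
conv.set_input(1, dq.get_output(0))
network.mark_output(conv.get_output(0))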
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization/ReadVariableOp__12 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization/ReadVariableOp_1__13 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp__14 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp_1__15 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization/ReadVariableOp__12 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization/ReadVariableOp_1__13 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp__14 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3/ReadVariableOp_1__15 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation/Relu for ONNX node: StatefulPartitionedCall/model/activation/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__640 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] 
Searching for input: StatefulPartitionedCall/model/activation/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__634 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__640 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__634 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__640:0 for ONNX tensor: QuantLinearNode__640:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__640 [QuantizeLinear] outputs: [QuantLinearNode__640:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__641 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__640:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__634 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__641 [DequantizeLinear] inputs: [QuantLinearNode__640:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__634 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__641 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd/ReadVariableOp__23 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_1/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd/ReadVariableOp__23 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
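For reference, the QuantLinearNode__*/DequantLinearNode__* pairs being parsed here correspond to ONNX QuantizeLinear/DequantizeLinear nodes that share a scale initializer and an INT8 zero point. A rough opset-13 reconstruction of one such activation pair (node and tensor names and the scale value are illustrative only, not extracted from resnet.onnx) could be assembled like this:

import numpy as np
from onnx import TensorProto, helper, numpy_helper

# Per-tensor scale and INT8 zero point, analogous to quant_scale__* and
# zero_point__* in the log.
scale = numpy_helper.from_array(np.array(0.05, dtype=np.float32), "quant_scale")
zp = numpy_helper.from_array(np.array(0, dtype=np.int8), "zero_point")

# QuantizeLinear followed by DequantizeLinear on an activation tensor.
q = helper.make_node("QuantizeLinear",
                     ["relu_out", "quant_scale", "zero_point"], ["q_out"])
dq = helper.make_node("DequantizeLinear",
                      ["q_out", "quant_scale", "zero_point"], ["dq_out"])

graph = helper.make_graph(
    [q, dq], "qdq_pair",
    [helper.make_tensor_value_info("relu_out", TensorProto.FLOAT, [None, 16, 32, 32])],
    [helper.make_tensor_value_info("dq_out", TensorProto.FLOAT, [None, 16, 32, 32])],
    initializer=[scale, zp],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])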
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp__24 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp_1__25 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp__26 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp_1__27 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp__24 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_1/ReadVariableOp_1__25 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp__26 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3/ReadVariableOp_1__27 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_1/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_1/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_1/Relu for ONNX node: StatefulPartitionedCall/model/activation_1/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_1/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_1/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_1/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_1/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: 
QuantLinearNode__648 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_1/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__646 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__648 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_1/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__646 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__648:0 for ONNX tensor: QuantLinearNode__648:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__648 [QuantizeLinear] outputs: [QuantLinearNode__648:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__649 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__648:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__646 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__649 [DequantizeLinear] inputs: [QuantLinearNode__648:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__646 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__649 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd/ReadVariableOp__33 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_2/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd/ReadVariableOp__33 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp__34 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp_1__35 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp__36 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp_1__37 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp__34 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_2/ReadVariableOp_1__35 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp__36 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3/ReadVariableOp_1__37 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__636 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__634 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__636 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__634 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__636:0 for ONNX tensor: QuantLinearNode__636:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__636 [QuantizeLinear] outputs: [QuantLinearNode__636:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__637 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__636:0 [10/04/2021-21:34:36] [V] 
[TRT] Searching for input: quant_scale__634 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__637 [DequantizeLinear] inputs: [QuantLinearNode__636:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__634 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__637 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add/add for ONNX node: StatefulPartitionedCall/model/add/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add/add:0 for ONNX tensor: StatefulPartitionedCall/model/add/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add/add [Add] outputs: [StatefulPartitionedCall/model/add/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_2/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_2/Relu [Relu] inputs: [StatefulPartitionedCall/model/add/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_2/Relu for ONNX node: StatefulPartitionedCall/model/activation_2/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_2/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_2/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_2/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_2/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__660 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_2/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__654 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__660 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_2/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__654 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__660:0 for ONNX tensor: QuantLinearNode__660:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__660 [QuantizeLinear] outputs: [QuantLinearNode__660:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__661 [DequantizeLinear] [10/04/2021-21:34:36] 
[V] [TRT] Searching for input: QuantLinearNode__660:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__654 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__661 [DequantizeLinear] inputs: [QuantLinearNode__660:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__654 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__661 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd/ReadVariableOp__45 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_3/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd/ReadVariableOp__45 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp__46 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp_1__47 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp__48 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp_1__49 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp__46 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_3/ReadVariableOp_1__47 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp__48 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3/ReadVariableOp_1__49 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_3/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_3/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_3/Relu for ONNX node: StatefulPartitionedCall/model/activation_3/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_3/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_3/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_3/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_3/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: 
QuantLinearNode__668 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_3/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__666 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__668 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_3/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__666 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__668:0 for ONNX tensor: QuantLinearNode__668:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__668 [QuantizeLinear] outputs: [QuantLinearNode__668:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__669 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__668:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__666 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__669 [DequantizeLinear] inputs: [QuantLinearNode__668:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__666 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__669 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd/ReadVariableOp__55 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_4/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd/ReadVariableOp__55 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp__56 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp_1__57 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp__58 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp_1__59 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp__56 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_4/ReadVariableOp_1__57 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp__58 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3/ReadVariableOp_1__59 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__656 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_2/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__654 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__656 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_2/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__654 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__656:0 for ONNX tensor: QuantLinearNode__656:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__656 [QuantizeLinear] outputs: [QuantLinearNode__656:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__657 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__656:0 [10/04/2021-21:34:36] 
[V] [TRT] Searching for input: quant_scale__654 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__657 [DequantizeLinear] inputs: [QuantLinearNode__656:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__654 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_1/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_1/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__657 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_1/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_1/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_1/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_1/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_1/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_1/add for ONNX node: StatefulPartitionedCall/model/add_1/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_1/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_1/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_1/add [Add] outputs: [StatefulPartitionedCall/model/add_1/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_4/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_1/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_4/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_1/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_4/Relu for ONNX node: StatefulPartitionedCall/model/activation_4/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_4/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_4/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_4/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__680 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_4/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__674 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__680 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_4/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__674 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__680:0 for ONNX tensor: QuantLinearNode__680:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__680 [QuantizeLinear] outputs: [QuantLinearNode__680:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__681 
[DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__680:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__674 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__681 [DequantizeLinear] inputs: [QuantLinearNode__680:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__674 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__681 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd/ReadVariableOp__67 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_5/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd/ReadVariableOp__67 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp__68 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp_1__69 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp__70 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp_1__71 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp__68 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_5/ReadVariableOp_1__69 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp__70 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3/ReadVariableOp_1__71 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_5/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_5/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_5/Relu for ONNX node: StatefulPartitionedCall/model/activation_5/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_5/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_5/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_5/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_5/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: 
QuantLinearNode__688 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_5/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__686 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__688 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_5/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__686 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__688:0 for ONNX tensor: QuantLinearNode__688:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__688 [QuantizeLinear] outputs: [QuantLinearNode__688:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__689 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__688:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__686 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__689 [DequantizeLinear] inputs: [QuantLinearNode__688:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__686 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__689 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd/ReadVariableOp__77 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_6/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd/ReadVariableOp__77 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
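Each of these Q/DQ pairs points at a scalar scale initializer (quant_scale__686 for this one) plus the shared INT8 zero point zero_point__1163. To cross-check those values against resnet.onnx directly instead of reading them out of the verbose parse, a short script over the standard onnx Python API is enough (only the file name comes from the command at the top of the log; everything else is illustrative):

import onnx
from onnx import numpy_helper

model = onnx.load("resnet.onnx")
# Map initializer names (quant_scale__*, zero_point__*, weights, ...) to arrays.
inits = {t.name: numpy_helper.to_array(t) for t in model.graph.initializer}

for node in model.graph.node:
    if node.op_type in ("QuantizeLinear", "DequantizeLinear"):
        scale = inits.get(node.input[1])
        zero_point = node.input[2] if len(node.input) > 2 else "<default>"
        # Per-tensor scales are scalars; per-channel weight scales are 1-D arrays.
        desc = ("not a constant" if scale is None
                else f"shape={scale.shape} values={scale.ravel()[:4]}")
        print(f"{node.op_type:17s} {node.name:28s} "
              f"scale={node.input[1]} ({desc}) zp={zero_point}")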
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp__78 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp_1__79 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp__80 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp_1__81 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp__78 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_6/ReadVariableOp_1__79 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp__80 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3/ReadVariableOp_1__81 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__676 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_4/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__674 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__676 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_4/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__674 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__676:0 for ONNX tensor: QuantLinearNode__676:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__676 [QuantizeLinear] outputs: [QuantLinearNode__676:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__677 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__676:0 [10/04/2021-21:34:36] 
[V] [TRT] Searching for input: quant_scale__674 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__677 [DequantizeLinear] inputs: [QuantLinearNode__676:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__674 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_2/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_2/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__677 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_2/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_2/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_2/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_2/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_2/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_2/add for ONNX node: StatefulPartitionedCall/model/add_2/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_2/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_2/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_2/add [Add] outputs: [StatefulPartitionedCall/model/add_2/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_6/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_2/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_6/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_2/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_6/Relu for ONNX node: StatefulPartitionedCall/model/activation_6/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_6/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_6/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_6/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_6/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__700 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_6/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__694 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__700 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_6/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__694 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__700:0 for ONNX tensor: QuantLinearNode__700:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__700 [QuantizeLinear] outputs: [QuantLinearNode__700:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__701 
[DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__700:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__694 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__701 [DequantizeLinear] inputs: [QuantLinearNode__700:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__694 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__701 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd/ReadVariableOp__89 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_7/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd/ReadVariableOp__89 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp__90 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp_1__91 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp__92 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp_1__93 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp__90 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_7/ReadVariableOp_1__91 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp__92 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3/ReadVariableOp_1__93 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_7/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_7/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_7/Relu for ONNX node: StatefulPartitionedCall/model/activation_7/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_7/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_7/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_7/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_7/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: 
QuantLinearNode__708 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_7/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__706 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__708 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_7/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__706 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__708:0 for ONNX tensor: QuantLinearNode__708:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__708 [QuantizeLinear] outputs: [QuantLinearNode__708:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__709 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__708:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__706 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__709 [DequantizeLinear] inputs: [QuantLinearNode__708:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__706 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__709 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd/ReadVariableOp__99 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_8/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd/ReadVariableOp__99 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp__100 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp_1__101 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp__102 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp_1__103 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp__100 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_8/ReadVariableOp_1__101 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp__102 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3/ReadVariableOp_1__103 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__696 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_6/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__694 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__696 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_6/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__694 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__696:0 for ONNX tensor: QuantLinearNode__696:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__696 [QuantizeLinear] outputs: [QuantLinearNode__696:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__697 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__696:0 
[10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__694 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__697 [DequantizeLinear] inputs: [QuantLinearNode__696:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__694 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_3/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_3/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__697 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_3/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_3/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_3/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_3/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_3/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_3/add for ONNX node: StatefulPartitionedCall/model/add_3/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_3/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_3/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_3/add [Add] outputs: [StatefulPartitionedCall/model/add_3/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_8/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_3/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_8/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_3/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_8/Relu for ONNX node: StatefulPartitionedCall/model/activation_8/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_8/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_8/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_8/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_8/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__720 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_8/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__714 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__720 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_8/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__714 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__720:0 for ONNX tensor: QuantLinearNode__720:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__720 [QuantizeLinear] outputs: [QuantLinearNode__720:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: 
DequantLinearNode__721 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__720:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__714 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__721 [DequantizeLinear] inputs: [QuantLinearNode__720:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__714 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__721 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd/ReadVariableOp__111 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_9/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd/ReadVariableOp__111 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
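The residual connections all follow the same shape as the add_3 block just parsed: the Relu output feeding the skip path (activation_6/Relu:0 here) gets a second QuantizeLinear/DequantizeLinear pair of its own (QuantLinearNode__696/DequantLinearNode__697, registered as quant_identity_3/quantize_and_dequantize) that reuses the same scale (quant_scale__694) as the pair feeding the next convolution (QuantLinearNode__700), and the dequantized skip tensor is then added to the BatchNorm output of the conv branch. A rough onnx.helper sketch of that graph fragment, with made-up tensor names and a made-up scale value, looks like this:

import numpy as np
from onnx import TensorProto, checker, helper, numpy_helper

# Placeholder initializers; the real graph uses quant_scale__694 / zero_point__1163.
scale = numpy_helper.from_array(np.array(0.05, dtype=np.float32), "act_scale")
zero_point = numpy_helper.from_array(np.array(0, dtype=np.int8), "act_zp")

nodes = [
    # Q/DQ pair feeding the next convolution.
    helper.make_node("QuantizeLinear", ["relu_out", "act_scale", "act_zp"], ["q_conv"]),
    helper.make_node("DequantizeLinear", ["q_conv", "act_scale", "act_zp"], ["dq_conv"]),
    # Separate Q/DQ pair for the identity/skip branch, reusing the same scale.
    helper.make_node("QuantizeLinear", ["relu_out", "act_scale", "act_zp"], ["q_skip"]),
    helper.make_node("DequantizeLinear", ["q_skip", "act_scale", "act_zp"], ["dq_skip"]),
    # Residual add of the dequantized skip tensor with the conv branch's BatchNorm output.
    helper.make_node("Add", ["dq_skip", "bn_out"], ["add_out"]),
]

graph = helper.make_graph(
    nodes, "residual_qdq_sketch",
    inputs=[helper.make_tensor_value_info("relu_out", TensorProto.FLOAT, [None, 16, 32, 32]),
            helper.make_tensor_value_info("bn_out", TensorProto.FLOAT, [None, 16, 32, 32])],
    outputs=[helper.make_tensor_value_info("add_out", TensorProto.FLOAT, [None, 16, 32, 32]),
             helper.make_tensor_value_info("dq_conv", TensorProto.FLOAT, [None, 16, 32, 32])],
    initializer=[scale, zero_point],
)
checker.check_model(helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)]))

Reusing one scale initializer for both consumers keeps the two branches numerically consistent, which is generally what allows TensorRT to keep the elementwise Add and the surrounding residual block in INT8.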
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp__112 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp_1__113 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp__114 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp_1__115 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp__112 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_9/ReadVariableOp_1__113 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp__114 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3/ReadVariableOp_1__115 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_9/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_9/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_9/Relu for ONNX node: StatefulPartitionedCall/model/activation_9/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_9/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_9/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_9/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_9/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: 
QuantLinearNode__728 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_9/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__726 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__728 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_9/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__726 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__728:0 for ONNX tensor: QuantLinearNode__728:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__728 [QuantizeLinear] outputs: [QuantLinearNode__728:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__729 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__728:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__726 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__729 [DequantizeLinear] inputs: [QuantLinearNode__728:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__726 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__729 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd/ReadVariableOp__121 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_10/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd/ReadVariableOp__121 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp__122 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp_1__123 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp__124 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp_1__125 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp__122 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_10/ReadVariableOp_1__123 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp__124 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3/ReadVariableOp_1__125 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__716 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_8/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__714 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__716 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_8/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__714 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__716:0 for ONNX tensor: QuantLinearNode__716:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__716 [QuantizeLinear] outputs: [QuantLinearNode__716:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__717 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__716:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__714 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__717 [DequantizeLinear] inputs: [QuantLinearNode__716:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__714 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_4/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_4/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__717 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_4/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_4/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_4/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_4/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_4/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_4/add for ONNX node: StatefulPartitionedCall/model/add_4/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_4/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_4/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_4/add [Add] outputs: [StatefulPartitionedCall/model/add_4/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_10/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_4/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_10/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_4/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_10/Relu for ONNX node: StatefulPartitionedCall/model/activation_10/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_10/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_10/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_10/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_10/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__740 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_10/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__738 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__740 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_10/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__738 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__740:0 for ONNX tensor: QuantLinearNode__740:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__740 [QuantizeLinear] outputs: [QuantLinearNode__740:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] 
[V] [TRT] Parsing node: DequantLinearNode__741 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__740:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__738 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__741 [DequantizeLinear] inputs: [QuantLinearNode__740:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__738 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__741 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd/ReadVariableOp__133 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_11/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd/ReadVariableOp__133 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp__134 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp_1__135 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp__136 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp_1__137 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp__134 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_11/ReadVariableOp_1__135 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp__136 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3/ReadVariableOp_1__137 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_11/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_11/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_11/Relu for ONNX node: StatefulPartitionedCall/model/activation_11/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_11/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_11/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_11/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_11/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__748 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_11/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__746 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__748 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_11/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__746 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__748:0 for ONNX tensor: QuantLinearNode__748:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__748 [QuantizeLinear] outputs: [QuantLinearNode__748:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__749 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__748:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__746 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__749 [DequantizeLinear] inputs: [QuantLinearNode__748:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__746 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__749 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd/ReadVariableOp__143 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_12/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd/ReadVariableOp__143 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
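The recurring message "Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call." reflects how the parser builds Conv layers in an explicit-quantization (QDQ) network: the kernel is not a static weights blob but the output tensor of the weight-side quantize_and_dequantize_1 pair, so it is wired in as the layer's second input. A rough sketch of the equivalent construction with the TensorRT Python API, under the assumption that this mirrors what the parser does internally; all shapes and values are placeholders standing in for what resnet.onnx contains:

```python
# Rough sketch (an assumption about the pattern, not the parser's actual code):
# a convolution whose kernel comes from a Q/DQ pair and is attached via input 1,
# the Python analogue of the setInput(1, kernel_tensor) call the log refers to.
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

act = network.add_input("act", trt.float32, (1, 16, 32, 32))

# Placeholder FP32 weights and per-tensor scale.
w = network.add_constant((16, 16, 3, 3),
                         trt.Weights(np.zeros((16, 16, 3, 3), dtype=np.float32)))
scale = network.add_constant((1,), trt.Weights(np.array([0.01], dtype=np.float32)))

q = network.add_quantize(w.get_output(0), scale.get_output(0))      # FP32 -> INT8
dq = network.add_dequantize(q.get_output(0), scale.get_output(0))   # INT8 -> FP32

# Convolution created with empty kernel weights; the dequantized kernel tensor is
# attached afterwards as input 1.
conv = network.add_convolution_nd(act, num_output_maps=16, kernel_shape=(3, 3),
                                  kernel=trt.Weights(), bias=trt.Weights())
conv.set_input(1, dq.get_output(0))
conv.padding_nd = (1, 1)
```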
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp__144 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp_1__145 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp__146 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp_1__147 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp__144 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_12/ReadVariableOp_1__145 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp__146 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3/ReadVariableOp_1__147 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__736 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_10/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__738 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__736 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_10/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__738 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__736:0 for ONNX tensor: QuantLinearNode__736:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__736 [QuantizeLinear] outputs: [QuantLinearNode__736:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__737 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__736:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__738 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__737 [DequantizeLinear] inputs: [QuantLinearNode__736:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__738 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_5/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_5/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__737 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_5/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_5/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_5/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_5/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_5/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_5/add for ONNX node: StatefulPartitionedCall/model/add_5/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_5/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_5/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_5/add [Add] outputs: [StatefulPartitionedCall/model/add_5/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_12/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_5/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_12/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_5/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_12/Relu for ONNX node: StatefulPartitionedCall/model/activation_12/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_12/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_12/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_12/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_12/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__760 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_12/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__758 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__760 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_12/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__758 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__760:0 for ONNX tensor: QuantLinearNode__760:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__760 [QuantizeLinear] outputs: [QuantLinearNode__760:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] 
[V] [TRT] Parsing node: DequantLinearNode__761 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__760:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__758 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__761 [DequantizeLinear] inputs: [QuantLinearNode__760:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__758 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__761 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd/ReadVariableOp__155 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_13/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd/ReadVariableOp__155 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp__156 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp_1__157 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp__158 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp_1__159 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp__156 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_13/ReadVariableOp_1__157 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp__158 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3/ReadVariableOp_1__159 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_13/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_13/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_13/Relu for ONNX node: StatefulPartitionedCall/model/activation_13/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_13/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_13/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_13/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_13/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__768 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_13/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__766 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__768 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_13/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__766 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__768:0 for ONNX tensor: QuantLinearNode__768:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__768 [QuantizeLinear] outputs: [QuantLinearNode__768:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__769 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__768:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__766 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__769 [DequantizeLinear] inputs: [QuantLinearNode__768:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__766 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__769 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd/ReadVariableOp__165 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_14/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd/ReadVariableOp__165 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp__166 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp_1__167 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp__168 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp_1__169 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp__166 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_14/ReadVariableOp_1__167 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp__168 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3/ReadVariableOp_1__169 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__756 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_12/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__758 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__756 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_12/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__758 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__756:0 for ONNX tensor: QuantLinearNode__756:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__756 [QuantizeLinear] outputs: [QuantLinearNode__756:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__757 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__756:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__758 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__757 [DequantizeLinear] inputs: [QuantLinearNode__756:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__758 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_6/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_6/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__757 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_6/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_6/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_6/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_6/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_6/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_6/add for ONNX node: StatefulPartitionedCall/model/add_6/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_6/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_6/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_6/add [Add] outputs: [StatefulPartitionedCall/model/add_6/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_14/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_6/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_14/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_6/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_14/Relu for ONNX node: StatefulPartitionedCall/model/activation_14/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_14/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_14/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_14/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_14/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__780 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_14/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__774 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__780 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_14/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__774 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__780:0 for ONNX tensor: QuantLinearNode__780:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__780 [QuantizeLinear] outputs: [QuantLinearNode__780:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] 
[V] [TRT] Parsing node: DequantLinearNode__781 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__780:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__774 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__781 [DequantizeLinear] inputs: [QuantLinearNode__780:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__774 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__781 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd/ReadVariableOp__177 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_15/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd/ReadVariableOp__177 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp__178 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp_1__179 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp__180 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp_1__181 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp__178 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_15/ReadVariableOp_1__179 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp__180 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3/ReadVariableOp_1__181 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_15/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_15/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_15/Relu for ONNX node: StatefulPartitionedCall/model/activation_15/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_15/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_15/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_15/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_15/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__788 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_15/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__786 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__788 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_15/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__786 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__788:0 for ONNX tensor: QuantLinearNode__788:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__788 [QuantizeLinear] outputs: [QuantLinearNode__788:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__789 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__788:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__786 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__789 [DequantizeLinear] inputs: [QuantLinearNode__788:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__786 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__789 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd/ReadVariableOp__187 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_16/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd/ReadVariableOp__187 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp__188 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp_1__189 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp__190 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp_1__191 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp__188 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_16/ReadVariableOp_1__189 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp__190 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3/ReadVariableOp_1__191 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__776 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_14/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__774 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__776 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_14/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__774 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__776:0 for ONNX tensor: QuantLinearNode__776:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__776 [QuantizeLinear] outputs: [QuantLinearNode__776:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__777 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__776:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__774 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__777 [DequantizeLinear] inputs: [QuantLinearNode__776:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__774 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_7/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_7/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__777 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_7/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_7/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_7/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_7/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_7/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_7/add for ONNX node: StatefulPartitionedCall/model/add_7/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_7/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_7/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_7/add [Add] outputs: [StatefulPartitionedCall/model/add_7/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_16/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_7/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_16/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_7/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_16/Relu for ONNX node: StatefulPartitionedCall/model/activation_16/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_16/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_16/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_16/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_16/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__800 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_16/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__794 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__800 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_16/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__794 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__800:0 for ONNX tensor: QuantLinearNode__800:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__800 [QuantizeLinear] outputs: [QuantLinearNode__800:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] 
[V] [TRT] Parsing node: DequantLinearNode__801 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__800:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__794 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__801 [DequantizeLinear] inputs: [QuantLinearNode__800:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__794 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__801 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd/ReadVariableOp__199 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_17/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd/ReadVariableOp__199 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp__200 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp_1__201 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp__202 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp_1__203 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp__200 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_17/ReadVariableOp_1__201 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp__202 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3/ReadVariableOp_1__203 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_17/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_17/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_17/Relu for ONNX node: StatefulPartitionedCall/model/activation_17/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_17/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_17/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_17/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_17/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__808 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_17/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__806 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__808 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_17/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__806 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__808:0 for ONNX tensor: QuantLinearNode__808:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__808 [QuantizeLinear] outputs: [QuantLinearNode__808:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__809 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__808:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__806 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__809 [DequantizeLinear] inputs: [QuantLinearNode__808:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__806 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__809 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd/ReadVariableOp__209 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_18/quantize_and_dequantize_1:0 -> (16, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd/ReadVariableOp__209 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp__210 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp_1__211 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp__212 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp_1__213 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp__210 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_18/ReadVariableOp_1__211 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp__212 -> (16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3/ReadVariableOp_1__213 -> (16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__796 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_16/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__794 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__796 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_16/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__794 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__796:0 for ONNX tensor: QuantLinearNode__796:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__796 [QuantizeLinear] outputs: [QuantLinearNode__796:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__797 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__796:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__794 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__797 [DequantizeLinear] inputs: [QuantLinearNode__796:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__794 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_8/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_8/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__797 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_8/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_8/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_8/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_8/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_8/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_8/add for ONNX node: StatefulPartitionedCall/model/add_8/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_8/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_8/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_8/add [Add] outputs: [StatefulPartitionedCall/model/add_8/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_18/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_8/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_18/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_8/add:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_18/Relu for ONNX node: StatefulPartitionedCall/model/activation_18/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_18/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_18/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_18/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_18/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__828 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_18/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__826 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__828 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_18/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__826 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__828:0 for ONNX tensor: QuantLinearNode__828:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__828 [QuantizeLinear] outputs: [QuantLinearNode__828:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] 
[V] [TRT] Parsing node: DequantLinearNode__829 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__828:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__826 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__829 [DequantizeLinear] inputs: [QuantLinearNode__828:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__826 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__829 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd/ReadVariableOp__227 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_19/quantize_and_dequantize_1:0 -> (32, 16, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd/ReadVariableOp__227 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp__228 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp_1__229 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp__230 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp_1__231 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp__228 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_19/ReadVariableOp_1__229 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp__230 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3/ReadVariableOp_1__231 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_19/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_19/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_19/Relu for ONNX node: StatefulPartitionedCall/model/activation_19/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_19/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_19/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_19/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_19/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__836 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_19/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__834 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__836 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_19/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__834 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__836:0 for ONNX tensor: QuantLinearNode__836:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__836 [QuantizeLinear] outputs: [QuantLinearNode__836:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__837 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__836:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__834 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__837 [DequantizeLinear] inputs: [QuantLinearNode__836:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__834 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__837 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd/ReadVariableOp__237 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_20/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd/ReadVariableOp__237 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp__238 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp_1__239 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp__240 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp_1__241 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp__238 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_20/ReadVariableOp_1__239 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp__240 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3/ReadVariableOp_1__241 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__816 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_18/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__826 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__816 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_18/Relu:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__826 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__816:0 for ONNX tensor: QuantLinearNode__816:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__816 [QuantizeLinear] outputs: [QuantLinearNode__816:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__817 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__816:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__826 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__817 [DequantizeLinear] inputs: [QuantLinearNode__816:0 -> (-1, 16, 32, 32)[FLOAT]], [quant_scale__826 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__817 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd/ReadVariableOp__219 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize:0 -> (-1, 16, 32, 32)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_21/quantize_and_dequantize_1:0 -> (32, 16, 1, 1)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd/ReadVariableOp__219 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 16, 32, 32) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__824 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__822 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__824 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__822 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__824:0 for ONNX tensor: QuantLinearNode__824:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__824 [QuantizeLinear] outputs: [QuantLinearNode__824:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__825 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__824:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__822 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__825 [DequantizeLinear] inputs: [QuantLinearNode__824:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__822 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_9/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_9/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__825 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_9/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_9/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_9/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_9/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_9/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_9/add for ONNX node: StatefulPartitionedCall/model/add_9/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_9/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_9/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_9/add [Add] outputs: [StatefulPartitionedCall/model/add_9/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_20/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_9/add:0 
[10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_20/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_9/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_20/Relu for ONNX node: StatefulPartitionedCall/model/activation_20/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_20/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_20/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_20/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_20/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__848 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_20/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__846 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__848 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_20/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__846 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__848:0 for ONNX tensor: QuantLinearNode__848:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__848 [QuantizeLinear] outputs: [QuantLinearNode__848:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__849 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__848:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__846 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__849 [DequantizeLinear] inputs: [QuantLinearNode__848:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__846 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__849 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd/ReadVariableOp__249 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_22/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd/ReadVariableOp__249 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp__250 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp_1__251 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp__252 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp_1__253 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp__250 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_21/ReadVariableOp_1__251 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp__252 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3/ReadVariableOp_1__253 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_21/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_21/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_21/Relu for ONNX node: StatefulPartitionedCall/model/activation_21/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_21/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_21/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_21/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_21/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__856 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_21/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__854 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__856 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_21/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__854 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__856:0 for ONNX tensor: QuantLinearNode__856:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__856 [QuantizeLinear] outputs: [QuantLinearNode__856:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__857 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__856:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__854 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__857 [DequantizeLinear] inputs: [QuantLinearNode__856:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__854 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__857 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd/ReadVariableOp__259 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_23/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd/ReadVariableOp__259 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp__260 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp_1__261 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp__262 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp_1__263 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp__260 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_22/ReadVariableOp_1__261 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp__262 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3/ReadVariableOp_1__263 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__844 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_20/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__846 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__844 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_20/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__846 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__844:0 for ONNX tensor: QuantLinearNode__844:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__844 [QuantizeLinear] outputs: [QuantLinearNode__844:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__845 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__844:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__846 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__845 [DequantizeLinear] inputs: [QuantLinearNode__844:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__846 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_10/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_10/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__845 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_10/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_10/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_10/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_10/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_10/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_10/add for ONNX node: StatefulPartitionedCall/model/add_10/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_10/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_10/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_10/add [Add] outputs: [StatefulPartitionedCall/model/add_10/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_22/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_10/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_22/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_10/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_22/Relu for ONNX node: StatefulPartitionedCall/model/activation_22/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_22/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_22/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_22/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_22/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__868 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_22/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__866 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__868 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_22/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__866 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__868:0 for ONNX tensor: QuantLinearNode__868:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__868 [QuantizeLinear] outputs: [QuantLinearNode__868:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__869 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__868:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__866 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__869 [DequantizeLinear] inputs: [QuantLinearNode__868:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__866 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__869 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd/ReadVariableOp__271 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_24/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd/ReadVariableOp__271 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp__272 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp_1__273 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp__274 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp_1__275 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp__272 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_23/ReadVariableOp_1__273 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp__274 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3/ReadVariableOp_1__275 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_23/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_23/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_23/Relu for ONNX node: StatefulPartitionedCall/model/activation_23/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_23/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_23/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_23/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_23/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__876 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_23/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__874 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__876 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_23/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__874 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__876:0 for ONNX tensor: QuantLinearNode__876:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__876 [QuantizeLinear] outputs: [QuantLinearNode__876:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__877 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__876:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__874 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__877 [DequantizeLinear] inputs: [QuantLinearNode__876:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__874 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__877 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd/ReadVariableOp__281 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_25/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd/ReadVariableOp__281 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp__282 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp_1__283 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp__284 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp_1__285 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp__282 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_24/ReadVariableOp_1__283 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp__284 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3/ReadVariableOp_1__285 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__864 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_22/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__866 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__864 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_22/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__866 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__864:0 for ONNX tensor: QuantLinearNode__864:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__864 [QuantizeLinear] outputs: [QuantLinearNode__864:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__865 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__864:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__866 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__865 [DequantizeLinear] inputs: [QuantLinearNode__864:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__866 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_11/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_11/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__865 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_11/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_11/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_11/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_11/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_11/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_11/add for ONNX node: StatefulPartitionedCall/model/add_11/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_11/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_11/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_11/add [Add] outputs: [StatefulPartitionedCall/model/add_11/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_24/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_11/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_24/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_11/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_24/Relu for ONNX node: StatefulPartitionedCall/model/activation_24/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_24/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_24/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_24/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_24/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__888 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_24/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__886 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__888 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_24/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__886 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__888:0 for ONNX tensor: QuantLinearNode__888:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__888 [QuantizeLinear] outputs: [QuantLinearNode__888:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__889 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__888:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__886 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__889 [DequantizeLinear] inputs: [QuantLinearNode__888:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__886 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__889 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd/ReadVariableOp__293 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_26/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd/ReadVariableOp__293 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp__294 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp_1__295 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp__296 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp_1__297 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp__294 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_25/ReadVariableOp_1__295 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp__296 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3/ReadVariableOp_1__297 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_25/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_25/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_25/Relu for ONNX node: StatefulPartitionedCall/model/activation_25/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_25/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_25/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_25/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_25/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__896 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_25/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__894 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__896 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_25/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__894 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__896:0 for ONNX tensor: QuantLinearNode__896:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__896 [QuantizeLinear] outputs: [QuantLinearNode__896:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__897 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__896:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__894 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__897 [DequantizeLinear] inputs: [QuantLinearNode__896:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__894 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__897 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd/ReadVariableOp__303 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_27/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd/ReadVariableOp__303 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp__304 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp_1__305 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp__306 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp_1__307 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp__304 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_26/ReadVariableOp_1__305 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp__306 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3/ReadVariableOp_1__307 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__884 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_24/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__886 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__884 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_24/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__886 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__884:0 for ONNX tensor: QuantLinearNode__884:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__884 [QuantizeLinear] outputs: [QuantLinearNode__884:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__885 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__884:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__886 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__885 [DequantizeLinear] inputs: [QuantLinearNode__884:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__886 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_12/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_12/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__885 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_12/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_12/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_12/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_12/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_12/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_12/add for ONNX node: StatefulPartitionedCall/model/add_12/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_12/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_12/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_12/add [Add] outputs: [StatefulPartitionedCall/model/add_12/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_26/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_12/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_26/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_12/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_26/Relu for ONNX node: StatefulPartitionedCall/model/activation_26/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_26/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_26/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_26/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_26/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__908 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_26/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__902 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__908 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_26/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__902 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__908:0 for ONNX tensor: QuantLinearNode__908:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__908 [QuantizeLinear] outputs: [QuantLinearNode__908:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__909 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__908:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__902 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__909 [DequantizeLinear] inputs: [QuantLinearNode__908:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__902 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__909 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd/ReadVariableOp__315 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_28/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd/ReadVariableOp__315 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
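The verbose line "Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call." is informational rather than an error: in this explicitly quantized model each convolution's kernel reaches the Conv node through a Q/DQ subgraph (the quantize_and_dequantize_1 tensors above) instead of a static initializer, so the parser attaches the kernel as the layer's second input. Below is a minimal sketch of that wiring with the TensorRT Python API; the tensor names and the empty trt.Weights() placeholder are assumptions for illustration, and only the shapes are taken from the log.

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

    # Shapes mirror the log: dynamic-batch NCHW activations and 32x32x3x3 kernels.
    data = network.add_input("act", trt.float32, (-1, 32, 16, 16))
    kernel = network.add_input("kernel", trt.float32, (32, 32, 3, 3))  # in the real graph this comes from a Q/DQ pair

    # Create the convolution with empty static weights, then supply the kernel as
    # input 1; set_input is the Python counterpart of the C++ setInput(1, kernel_tensor) call.
    conv = network.add_convolution_nd(data, 32, (3, 3), trt.Weights())
    conv.set_input(1, kernel)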
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp__316 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp_1__317 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp__318 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp_1__319 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp__316 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_27/ReadVariableOp_1__317 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp__318 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3/ReadVariableOp_1__319 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_27/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_27/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_27/Relu for ONNX node: StatefulPartitionedCall/model/activation_27/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_27/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_27/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_27/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_27/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__916 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_27/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__914 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__916 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_27/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__914 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__916:0 for ONNX tensor: QuantLinearNode__916:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__916 [QuantizeLinear] outputs: [QuantLinearNode__916:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__917 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__916:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__914 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__917 [DequantizeLinear] inputs: [QuantLinearNode__916:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__914 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__917 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd/ReadVariableOp__325 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_29/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd/ReadVariableOp__325 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp__326 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp_1__327 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp__328 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp_1__329 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp__326 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_28/ReadVariableOp_1__327 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp__328 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3/ReadVariableOp_1__329 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__904 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_26/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__902 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__904 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_26/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__902 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__904:0 for ONNX tensor: QuantLinearNode__904:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__904 [QuantizeLinear] outputs: [QuantLinearNode__904:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__905 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__904:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__902 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__905 [DequantizeLinear] inputs: [QuantLinearNode__904:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__902 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_13/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_13/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__905 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_13/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_13/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_13/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_13/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_13/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_13/add for ONNX node: StatefulPartitionedCall/model/add_13/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_13/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_13/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_13/add [Add] outputs: [StatefulPartitionedCall/model/add_13/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_28/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_13/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_28/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_13/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_28/Relu for ONNX node: StatefulPartitionedCall/model/activation_28/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_28/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_28/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_28/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_28/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__928 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_28/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__926 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__928 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_28/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__926 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__928:0 for ONNX tensor: QuantLinearNode__928:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__928 [QuantizeLinear] outputs: [QuantLinearNode__928:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__929 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__928:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__926 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__929 [DequantizeLinear] inputs: [QuantLinearNode__928:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__926 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__929 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd/ReadVariableOp__337 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_30/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd/ReadVariableOp__337 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp__338 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp_1__339 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp__340 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp_1__341 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp__338 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_29/ReadVariableOp_1__339 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp__340 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3/ReadVariableOp_1__341 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_29/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_29/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_29/Relu for ONNX node: StatefulPartitionedCall/model/activation_29/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_29/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_29/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_29/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_29/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__936 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_29/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__934 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__936 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_29/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__934 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__936:0 for ONNX tensor: QuantLinearNode__936:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__936 [QuantizeLinear] outputs: [QuantLinearNode__936:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__937 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__936:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__934 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__937 [DequantizeLinear] inputs: [QuantLinearNode__936:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__934 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__937 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd/ReadVariableOp__347 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_31/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd/ReadVariableOp__347 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp__348 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp_1__349 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp__350 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp_1__351 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp__348 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_30/ReadVariableOp_1__349 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp__350 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3/ReadVariableOp_1__351 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__924 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_28/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__926 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__924 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_28/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__926 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__924:0 for ONNX tensor: QuantLinearNode__924:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__924 [QuantizeLinear] outputs: [QuantLinearNode__924:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__925 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__924:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__926 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__925 [DequantizeLinear] inputs: [QuantLinearNode__924:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__926 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_14/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_14/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__925 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_14/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_14/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_14/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_14/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_14/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_14/add for ONNX node: StatefulPartitionedCall/model/add_14/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_14/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_14/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_14/add [Add] outputs: [StatefulPartitionedCall/model/add_14/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_30/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_14/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_30/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_14/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_30/Relu for ONNX node: StatefulPartitionedCall/model/activation_30/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_30/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_30/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_30/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_30/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__948 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_30/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__942 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__948 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_30/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__942 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__948:0 for ONNX tensor: QuantLinearNode__948:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__948 [QuantizeLinear] outputs: [QuantLinearNode__948:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__949 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__948:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__942 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__949 [DequantizeLinear] inputs: [QuantLinearNode__948:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__942 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__949 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd/ReadVariableOp__359 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_32/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd/ReadVariableOp__359 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp__360 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp_1__361 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp__362 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp_1__363 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp__360 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_31/ReadVariableOp_1__361 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp__362 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3/ReadVariableOp_1__363 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_31/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_31/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_31/Relu for ONNX node: StatefulPartitionedCall/model/activation_31/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_31/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_31/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_31/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_31/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__956 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_31/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__954 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__956 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_31/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__954 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__956:0 for ONNX tensor: QuantLinearNode__956:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__956 [QuantizeLinear] outputs: [QuantLinearNode__956:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__957 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__956:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__954 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__957 [DequantizeLinear] inputs: [QuantLinearNode__956:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__954 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__957 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd/ReadVariableOp__369 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_33/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd/ReadVariableOp__369 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp__370 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp_1__371 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp__372 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp_1__373 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp__370 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_32/ReadVariableOp_1__371 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp__372 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3/ReadVariableOp_1__373 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__944 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_30/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__942 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__944 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_30/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__942 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__944:0 for ONNX tensor: QuantLinearNode__944:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__944 [QuantizeLinear] outputs: [QuantLinearNode__944:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__945 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__944:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__942 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__945 [DequantizeLinear] inputs: [QuantLinearNode__944:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__942 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_15/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_15/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__945 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_15/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_15/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_15/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_15/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_15/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_15/add for ONNX node: StatefulPartitionedCall/model/add_15/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_15/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_15/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_15/add [Add] outputs: [StatefulPartitionedCall/model/add_15/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_32/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_15/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_32/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_15/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_32/Relu for ONNX node: StatefulPartitionedCall/model/activation_32/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_32/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_32/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_32/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_32/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__968 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_32/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__966 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__968 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_32/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__966 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__968:0 for ONNX tensor: QuantLinearNode__968:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__968 [QuantizeLinear] outputs: [QuantLinearNode__968:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__969 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__968:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__966 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__969 [DequantizeLinear] inputs: [QuantLinearNode__968:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__966 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__969 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd/ReadVariableOp__381 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_34/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd/ReadVariableOp__381 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp__382 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp_1__383 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp__384 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp_1__385 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp__382 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_33/ReadVariableOp_1__383 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp__384 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3/ReadVariableOp_1__385 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_33/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_33/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_33/Relu for ONNX node: StatefulPartitionedCall/model/activation_33/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_33/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_33/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_33/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_33/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__976 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_33/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__974 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__976 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_33/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__974 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__976:0 for ONNX tensor: QuantLinearNode__976:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__976 [QuantizeLinear] outputs: [QuantLinearNode__976:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__977 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__976:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__974 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__977 [DequantizeLinear] inputs: [QuantLinearNode__976:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__974 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__977 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd/ReadVariableOp__391 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_35/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd/ReadVariableOp__391 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
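Each activation-side QuantizeLinear/DequantizeLinear pair in these entries (QuantLinearNode__976 feeding DequantLinearNode__977, both using quant_scale__974 and zero_point__1163) is per-tensor fake quantization: a scalar scale, an INT8 zero point, FLOAT in and FLOAT out. The arithmetic those two ONNX ops define is small enough to write out; the scale value below is invented, only the formulas are the point.

```python
import numpy as np

def quantize_linear(x, scale, zero_point=0):
    # ONNX QuantizeLinear (int8): saturate(round(x / scale) + zero_point),
    # with round-half-to-even, which np.round also uses.
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize_linear(q, scale, zero_point=0):
    # ONNX DequantizeLinear: (q - zero_point) * scale.
    return (q.astype(np.float32) - zero_point) * scale

scale = np.float32(0.05)                              # stand-in for quant_scale__974
x = np.random.rand(2, 32, 16, 16).astype(np.float32)  # ReLU output, so >= 0
x_qdq = dequantize_linear(quantize_linear(x, scale), scale)
print(np.abs(x - x_qdq).max())  # <= scale / 2 as long as nothing saturates
```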
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp__392 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp_1__393 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp__394 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp_1__395 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp__392 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_34/ReadVariableOp_1__393 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp__394 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3/ReadVariableOp_1__395 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__964 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_32/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__966 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__964 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_32/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__966 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__964:0 for ONNX tensor: QuantLinearNode__964:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__964 [QuantizeLinear] outputs: [QuantLinearNode__964:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__965 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__964:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__966 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__965 [DequantizeLinear] inputs: [QuantLinearNode__964:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__966 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_16/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_16/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__965 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_16/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_16/add [Add] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_16/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_16/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_16/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_16/add for ONNX node: StatefulPartitionedCall/model/add_16/add [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_16/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_16/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/add_16/add [Add] outputs: [StatefulPartitionedCall/model/add_16/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_34/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_16/add:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_34/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_16/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_34/Relu for ONNX node: StatefulPartitionedCall/model/activation_34/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_34/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_34/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_34/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_34/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__988 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_34/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__986 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__988 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_34/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__986 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__988:0 for ONNX tensor: QuantLinearNode__988:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__988 [QuantizeLinear] outputs: [QuantLinearNode__988:0 -> (-1, 32, 16, 16)[FLOAT]], 
[10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__989 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__988:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__986 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__989 [DequantizeLinear] inputs: [QuantLinearNode__988:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__986 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__989 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [Conv] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd/ReadVariableOp__403 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_36/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd/ReadVariableOp__403 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:36] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp__404 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp_1__405 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp__406 [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp_1__407 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp__404 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_35/ReadVariableOp_1__405 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp__406 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3/ReadVariableOp_1__407 -> (32)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_35/Relu [Relu] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_35/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_35/Relu for ONNX node: StatefulPartitionedCall/model/activation_35/Relu [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_35/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_35/Relu:0 [10/04/2021-21:34:36] [V] [TRT] StatefulPartitionedCall/model/activation_35/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_35/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], 
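The FusedBatchNormV3 nodes in this stretch each take the convolution output plus four (32,)-shaped parameter tensors (scale, offset, moving mean, and moving variance in the usual TF ordering). At inference the op reduces to a per-channel scale and shift; a small numpy sketch follows, with a placeholder epsilon (1e-3 is the tf.keras default; the actual value sits in the node attributes, which this log does not print).

```python
import numpy as np

def batchnorm_inference(x, gamma, beta, mean, var, eps=1e-3):
    # Inference-time BatchNormalization:
    #   y = gamma * (x - mean) / sqrt(var + eps) + beta, applied per channel.
    # x is NCHW; the four parameters each have shape (C,).
    g, b, m, v = (t.reshape(1, -1, 1, 1) for t in (gamma, beta, mean, var))
    return g * (x - m) / np.sqrt(v + eps) + b

# Shapes from the batch_normalization_35 entry above; values are random.
x = np.random.randn(2, 32, 16, 16).astype(np.float32)
params = [np.random.rand(32).astype(np.float32) for _ in range(4)]
y = batchnorm_inference(x, *params)
```

Because this is a pure per-channel affine transform, the builder can usually fold it into the preceding convolution during optimization, which is why these nodes tend to disappear from the fused engine.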
[10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__996 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_35/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__994 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__996 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_35/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__994 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__996:0 for ONNX tensor: QuantLinearNode__996:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__996 [QuantizeLinear] outputs: [QuantLinearNode__996:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__997 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__996:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__994 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__997 [DequantizeLinear] inputs: [QuantLinearNode__996:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__994 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__997 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: QuantLinearNode__984 [QuantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_34/Relu:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__986 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__984 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_34/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__986 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: QuantLinearNode__984:0 for ONNX tensor: QuantLinearNode__984:0 [10/04/2021-21:34:36] [V] [TRT] QuantLinearNode__984 [QuantizeLinear] outputs: [QuantLinearNode__984:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__985 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__984:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__986 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__985 [DequantizeLinear] inputs: [QuantLinearNode__984:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__986 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_17/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_17/quantize_and_dequantize:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__985 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_17/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__1189 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: 
QuantLinearNode__1188:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1186 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1189 [DequantizeLinear] inputs: [QuantLinearNode__1188:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1186 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1189 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__1181 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__1180:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1178 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1181 [DequantizeLinear] inputs: [QuantLinearNode__1180:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1178 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1181 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__1169 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__1168:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1166 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1169 [DequantizeLinear] inputs: [QuantLinearNode__1168:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1166 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1169 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__1161 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__1160:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1158 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1161 [DequantizeLinear] inputs: [QuantLinearNode__1160:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1158 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1161 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] 
Parsing node: DequantLinearNode__1149 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__1148:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1146 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1149 [DequantizeLinear] inputs: [QuantLinearNode__1148:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1146 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1149 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__1141 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__1140:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1138 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1141 [DequantizeLinear] inputs: [QuantLinearNode__1140:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1138 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:36] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize_1:0 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1141 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:36] [V] [TRT] Parsing node: DequantLinearNode__1129 [DequantizeLinear] [10/04/2021-21:34:36] [V] [TRT] Searching for input: QuantLinearNode__1128:0 [10/04/2021-21:34:36] [V] [TRT] Searching for input: quant_scale__1126 [10/04/2021-21:34:36] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:36] [V] [TRT] DequantLinearNode__1129 [DequantizeLinear] inputs: [QuantLinearNode__1128:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1126 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1129 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1121 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1120:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1118 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1121 [DequantizeLinear] inputs: [QuantLinearNode__1120:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1118 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1121 [DequantizeLinear] outputs: 
[StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1109 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1108:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1106 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1109 [DequantizeLinear] inputs: [QuantLinearNode__1108:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1106 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1109 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1101 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1100:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1098 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1101 [DequantizeLinear] inputs: [QuantLinearNode__1100:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1098 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1101 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1089 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1088:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1086 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1089 [DequantizeLinear] inputs: [QuantLinearNode__1088:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1086 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1089 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1081 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1080:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1078 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1081 [DequantizeLinear] inputs: [QuantLinearNode__1080:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1078 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize_1:0 for ONNX tensor: 
StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1081 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1069 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1068:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1066 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1069 [DequantizeLinear] inputs: [QuantLinearNode__1068:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1066 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1069 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1061 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1060:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1058 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1061 [DequantizeLinear] inputs: [QuantLinearNode__1060:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1058 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1061 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1049 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1048:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1046 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1049 [DequantizeLinear] inputs: [QuantLinearNode__1048:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1046 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1049 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1041 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1040:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1038 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1041 [DequantizeLinear] inputs: [QuantLinearNode__1040:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1038 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1041 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1029 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1028:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1026 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1029 [DequantizeLinear] inputs: [QuantLinearNode__1028:0 -> (64, 64, 3, 3)[FLOAT]], [quant_scale__1026 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1029 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1021 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1020:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1018 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1021 [DequantizeLinear] inputs: [QuantLinearNode__1020:0 -> (64, 32, 3, 3)[FLOAT]], [quant_scale__1018 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1021 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize_1:0 -> (64, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1009 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1008:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1006 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1107 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1009 [DequantizeLinear] inputs: [QuantLinearNode__1008:0 -> (64, 32, 1, 1)[FLOAT]], [quant_scale__1006 -> (64)[FLOAT]], [zero_point__1107 -> (64)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1009 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize_1:0 -> (64, 32, 1, 1)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1001 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1000:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__998 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__919 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1001 [DequantizeLinear] inputs: [QuantLinearNode__1000:0 -> (32, 32, 3, 
3)[FLOAT]], [quant_scale__998 -> (32)[FLOAT]], [zero_point__919 -> (32)[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize_1:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1001 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd/ReadVariableOp__413 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_37/quantize_and_dequantize_1:0 -> (32, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd/ReadVariableOp__413 -> (32)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp__414 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp_1__415 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp__416 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp_1__417 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp__414 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_36/ReadVariableOp_1__415 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp__416 -> (32)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3/ReadVariableOp_1__417 -> (32)[FLOAT]], [10/04/2021-21:34:37] [V] 
[TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_17/add [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_17/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_17/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_17/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_17/add for ONNX node: StatefulPartitionedCall/model/add_17/add [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_17/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_17/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_17/add [Add] outputs: [StatefulPartitionedCall/model/add_17/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_36/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_17/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_36/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_17/add:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_36/Relu for ONNX node: StatefulPartitionedCall/model/activation_36/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_36/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_36/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_36/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_36/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1016 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_36/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1002 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1016 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_36/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__1002 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1016:0 for ONNX tensor: QuantLinearNode__1016:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1016 [QuantizeLinear] outputs: [QuantLinearNode__1016:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1017 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for 
input: QuantLinearNode__1016:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1002 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1017 [DequantizeLinear] inputs: [QuantLinearNode__1016:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__1002 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1017 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd/ReadVariableOp__431 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_38/quantize_and_dequantize_1:0 -> (64, 32, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd/ReadVariableOp__431 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
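The long run of weight-side DequantizeLinear nodes a little earlier in this excerpt (the quant_conv2d_38 through quant_conv2d_56 quantize_and_dequantize_1 tensors) differs from the activation pairs: there the scale and zero point have shape (64,), one value per output channel, i.e. per-channel weight quantization along axis 0. A numpy sketch of that variant, with made-up values:

```python
import numpy as np

def dequantize_per_channel(q, scale, zero_point, axis=0):
    # Per-channel DequantizeLinear: broadcast the (C,) scale / zero_point
    # along `axis` of the quantized tensor.
    shape = [1] * q.ndim
    shape[axis] = -1
    z = zero_point.reshape(shape).astype(np.float32)
    return (q.astype(np.float32) - z) * scale.reshape(shape)

# Shapes from the quant_conv2d_39 weight entry: (64, 64, 3, 3) kernel, (64,) scales.
q_w = np.random.randint(-128, 128, size=(64, 64, 3, 3), dtype=np.int8)
scale = (np.random.rand(64).astype(np.float32) + 0.5) * 0.01
zero_point = np.zeros(64, dtype=np.int8)
w = dequantize_per_channel(q_w, scale, zero_point)
```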
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp__432 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp_1__433 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp__434 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp_1__435 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp__432 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_37/ReadVariableOp_1__433 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp__434 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3/ReadVariableOp_1__435 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_37/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_37/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_37/Relu for ONNX node: StatefulPartitionedCall/model/activation_37/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_37/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_37/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_37/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_37/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] 
[V] [TRT] Parsing node: QuantLinearNode__1024 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_37/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1022 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1024 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_37/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1022 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1024:0 for ONNX tensor: QuantLinearNode__1024:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1024 [QuantizeLinear] outputs: [QuantLinearNode__1024:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1025 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1024:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1022 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1025 [DequantizeLinear] inputs: [QuantLinearNode__1024:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1022 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1025 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd/ReadVariableOp__441 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_39/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd/ReadVariableOp__441 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp__442 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp_1__443 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp__444 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp_1__445 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp__442 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_38/ReadVariableOp_1__443 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp__444 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3/ReadVariableOp_1__445 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1004 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_36/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1002 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1004 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_36/Relu:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__1002 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1004:0 for ONNX tensor: QuantLinearNode__1004:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1004 [QuantizeLinear] outputs: [QuantLinearNode__1004:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1005 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: 
QuantLinearNode__1004:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1002 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1005 [DequantizeLinear] inputs: [QuantLinearNode__1004:0 -> (-1, 32, 16, 16)[FLOAT]], [quant_scale__1002 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1005 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd/ReadVariableOp__423 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize:0 -> (-1, 32, 16, 16)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_40/quantize_and_dequantize_1:0 -> (64, 32, 1, 1)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd/ReadVariableOp__423 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 32, 16, 16) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1012 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1010 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1012 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1010 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1012:0 for ONNX tensor: QuantLinearNode__1012:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1012 [QuantizeLinear] outputs: [QuantLinearNode__1012:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1013 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1012:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1010 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1013 [DequantizeLinear] inputs: [QuantLinearNode__1012:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1010 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_18/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_18/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1013 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_18/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_18/add [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_18/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_18/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_18/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_18/add for ONNX node: StatefulPartitionedCall/model/add_18/add [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_18/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_18/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_18/add [Add] outputs: [StatefulPartitionedCall/model/add_18/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_38/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model/add_18/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_38/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_18/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_38/Relu for ONNX node: StatefulPartitionedCall/model/activation_38/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_38/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_38/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_38/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_38/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1036 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_38/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1034 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1036 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_38/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1034 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1036:0 for ONNX tensor: QuantLinearNode__1036:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1036 [QuantizeLinear] outputs: [QuantLinearNode__1036:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1037 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1036:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1034 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1037 [DequantizeLinear] inputs: [QuantLinearNode__1036:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1034 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1037 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd/ReadVariableOp__453 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_41/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd/ReadVariableOp__453 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp__454 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp_1__455 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp__456 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp_1__457 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp__454 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_39/ReadVariableOp_1__455 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp__456 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3/ReadVariableOp_1__457 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_39/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_39/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_39/Relu for ONNX node: StatefulPartitionedCall/model/activation_39/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_39/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_39/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_39/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_39/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] 
[V] [TRT] Parsing node: QuantLinearNode__1044 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_39/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1042 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1044 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_39/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1042 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1044:0 for ONNX tensor: QuantLinearNode__1044:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1044 [QuantizeLinear] outputs: [QuantLinearNode__1044:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1045 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1044:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1042 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1045 [DequantizeLinear] inputs: [QuantLinearNode__1044:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1042 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1045 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd/ReadVariableOp__463 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_42/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd/ReadVariableOp__463 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp__464 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp_1__465 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp__466 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp_1__467 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp__464 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_40/ReadVariableOp_1__465 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp__466 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3/ReadVariableOp_1__467 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1032 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_38/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1034 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1032 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_38/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1034 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1032:0 for ONNX tensor: QuantLinearNode__1032:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1032 [QuantizeLinear] outputs: [QuantLinearNode__1032:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1033 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: 
QuantLinearNode__1032:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1034 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1033 [DequantizeLinear] inputs: [QuantLinearNode__1032:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1034 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_19/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_19/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1033 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_19/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_19/add [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_19/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_19/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_19/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_19/add for ONNX node: StatefulPartitionedCall/model/add_19/add [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_19/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_19/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_19/add [Add] outputs: [StatefulPartitionedCall/model/add_19/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_40/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_19/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_40/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_19/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_40/Relu for ONNX node: StatefulPartitionedCall/model/activation_40/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_40/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_40/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_40/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_40/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1056 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_40/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1054 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1056 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_40/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1054 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1056:0 for ONNX tensor: QuantLinearNode__1056:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1056 [QuantizeLinear] outputs: [QuantLinearNode__1056:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1057 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1056:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1054 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1057 [DequantizeLinear] inputs: [QuantLinearNode__1056:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1054 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1057 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd/ReadVariableOp__475 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_43/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd/ReadVariableOp__475 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp__476 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp_1__477 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp__478 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp_1__479 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp__476 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_41/ReadVariableOp_1__477 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp__478 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3/ReadVariableOp_1__479 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_41/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_41/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_41/Relu for ONNX node: StatefulPartitionedCall/model/activation_41/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_41/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_41/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_41/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_41/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] 
[V] [TRT] Parsing node: QuantLinearNode__1064 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_41/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1062 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1064 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_41/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1062 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1064:0 for ONNX tensor: QuantLinearNode__1064:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1064 [QuantizeLinear] outputs: [QuantLinearNode__1064:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1065 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1064:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1062 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1065 [DequantizeLinear] inputs: [QuantLinearNode__1064:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1062 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1065 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd/ReadVariableOp__485 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_44/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd/ReadVariableOp__485 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp__486 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp_1__487 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp__488 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp_1__489 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp__486 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_42/ReadVariableOp_1__487 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp__488 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3/ReadVariableOp_1__489 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1052 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_40/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1054 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1052 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_40/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1054 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1052:0 for ONNX tensor: QuantLinearNode__1052:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1052 [QuantizeLinear] outputs: [QuantLinearNode__1052:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1053 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: 
QuantLinearNode__1052:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1054 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1053 [DequantizeLinear] inputs: [QuantLinearNode__1052:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1054 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_20/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_20/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1053 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_20/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_20/add [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_20/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_20/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_20/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_20/add for ONNX node: StatefulPartitionedCall/model/add_20/add [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_20/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_20/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_20/add [Add] outputs: [StatefulPartitionedCall/model/add_20/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_42/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_20/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_42/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_20/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_42/Relu for ONNX node: StatefulPartitionedCall/model/activation_42/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_42/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_42/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_42/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_42/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1076 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_42/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1074 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1076 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_42/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1074 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1076:0 for ONNX tensor: QuantLinearNode__1076:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1076 [QuantizeLinear] outputs: [QuantLinearNode__1076:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1077 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1076:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1074 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1077 [DequantizeLinear] inputs: [QuantLinearNode__1076:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1074 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1077 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd/ReadVariableOp__497 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_45/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd/ReadVariableOp__497 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp__498 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp_1__499 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp__500 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp_1__501 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp__498 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_43/ReadVariableOp_1__499 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp__500 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3/ReadVariableOp_1__501 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_43/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_43/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_43/Relu for ONNX node: StatefulPartitionedCall/model/activation_43/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_43/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_43/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_43/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_43/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] 
[V] [TRT] Parsing node: QuantLinearNode__1084 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_43/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1082 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1084 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_43/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1082 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1084:0 for ONNX tensor: QuantLinearNode__1084:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1084 [QuantizeLinear] outputs: [QuantLinearNode__1084:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1085 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1084:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1082 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1085 [DequantizeLinear] inputs: [QuantLinearNode__1084:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1082 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1085 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd/ReadVariableOp__507 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_46/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd/ReadVariableOp__507 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp__508 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp_1__509 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp__510 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp_1__511 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp__508 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_44/ReadVariableOp_1__509 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp__510 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3/ReadVariableOp_1__511 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1072 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_42/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1074 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1072 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_42/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1074 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1072:0 for ONNX tensor: QuantLinearNode__1072:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1072 [QuantizeLinear] outputs: [QuantLinearNode__1072:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1073 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: 
QuantLinearNode__1072:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1074 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1073 [DequantizeLinear] inputs: [QuantLinearNode__1072:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1074 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_21/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_21/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1073 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_21/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_21/add [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_21/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_21/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_21/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_21/add for ONNX node: StatefulPartitionedCall/model/add_21/add [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_21/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_21/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_21/add [Add] outputs: [StatefulPartitionedCall/model/add_21/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_44/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_21/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_44/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_21/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_44/Relu for ONNX node: StatefulPartitionedCall/model/activation_44/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_44/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_44/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_44/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_44/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1096 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_44/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1094 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1096 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_44/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1094 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1096:0 for ONNX tensor: QuantLinearNode__1096:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1096 [QuantizeLinear] outputs: [QuantLinearNode__1096:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1097 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1096:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1094 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1097 [DequantizeLinear] inputs: [QuantLinearNode__1096:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1094 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1097 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd/ReadVariableOp__519 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_47/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd/ReadVariableOp__519 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp__520 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp_1__521 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp__522 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp_1__523 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp__520 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_45/ReadVariableOp_1__521 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp__522 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3/ReadVariableOp_1__523 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_45/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_45/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_45/Relu for ONNX node: StatefulPartitionedCall/model/activation_45/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_45/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_45/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_45/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_45/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] 
[V] [TRT] Parsing node: QuantLinearNode__1104 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_45/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1102 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1104 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_45/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1102 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1104:0 for ONNX tensor: QuantLinearNode__1104:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1104 [QuantizeLinear] outputs: [QuantLinearNode__1104:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1105 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1104:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1102 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1105 [DequantizeLinear] inputs: [QuantLinearNode__1104:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1102 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1105 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd/ReadVariableOp__529 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_48/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd/ReadVariableOp__529 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp__530 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp_1__531 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp__532 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp_1__533 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp__530 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_46/ReadVariableOp_1__531 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp__532 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3/ReadVariableOp_1__533 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1092 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_44/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1094 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1092 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_44/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1094 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1092:0 for ONNX tensor: QuantLinearNode__1092:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1092 [QuantizeLinear] outputs: [QuantLinearNode__1092:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1093 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: 
QuantLinearNode__1092:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1094 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1093 [DequantizeLinear] inputs: [QuantLinearNode__1092:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1094 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_22/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_22/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1093 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_22/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_22/add [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_22/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_22/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_22/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_22/add for ONNX node: StatefulPartitionedCall/model/add_22/add [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_22/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_22/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_22/add [Add] outputs: [StatefulPartitionedCall/model/add_22/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_46/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_22/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_46/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_22/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_46/Relu for ONNX node: StatefulPartitionedCall/model/activation_46/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_46/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_46/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_46/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_46/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1116 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_46/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1114 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1116 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_46/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1114 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1116:0 for ONNX tensor: QuantLinearNode__1116:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1116 [QuantizeLinear] outputs: [QuantLinearNode__1116:0 -> (-1, 64, 8, 8)[FLOAT]], 
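Note the skip connection here: QuantLinearNode__1092/DequantLinearNode__1093 form the quant_identity_22 pair that re-quantizes activation_44/Relu:0 before the add_22 residual sum, reusing zero_point__1163 but with its own scale (quant_scale__1094). A quick way to cross-check which tensors carry such Q/DQ pairs, and with which scales, is to walk the ONNX graph directly; a small sketch using the onnx package, assuming the same resnet.onnx is at hand (scales stored as Constant nodes rather than initializers will simply print as None):

    import onnx
    from onnx import numpy_helper

    model = onnx.load("resnet.onnx")
    initializers = {t.name: numpy_helper.to_array(t) for t in model.graph.initializer}

    # List every Q/DQ node together with the name and value of its scale input.
    for node in model.graph.node:
        if node.op_type in ("QuantizeLinear", "DequantizeLinear"):
            scale_name = node.input[1]
            scale = initializers.get(scale_name)
            print(node.op_type, node.name, scale_name,
                  None if scale is None else float(scale))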
[10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1117 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1116:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1114 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1117 [DequantizeLinear] inputs: [QuantLinearNode__1116:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1114 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1117 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd/ReadVariableOp__541 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_49/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd/ReadVariableOp__541 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp__542 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp_1__543 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp__544 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp_1__545 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp__542 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_47/ReadVariableOp_1__543 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp__544 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3/ReadVariableOp_1__545 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_47/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_47/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_47/Relu for ONNX node: StatefulPartitionedCall/model/activation_47/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_47/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_47/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_47/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_47/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] 
[V] [TRT] Parsing node: QuantLinearNode__1124 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_47/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1122 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1124 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_47/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1122 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1124:0 for ONNX tensor: QuantLinearNode__1124:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1124 [QuantizeLinear] outputs: [QuantLinearNode__1124:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1125 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1124:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1122 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1125 [DequantizeLinear] inputs: [QuantLinearNode__1124:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1122 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1125 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd/ReadVariableOp__551 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_50/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd/ReadVariableOp__551 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp__552 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp_1__553 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp__554 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp_1__555 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp__552 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_48/ReadVariableOp_1__553 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp__554 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3/ReadVariableOp_1__555 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1112 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_46/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1114 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1112 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_46/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1114 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1112:0 for ONNX tensor: QuantLinearNode__1112:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1112 [QuantizeLinear] outputs: [QuantLinearNode__1112:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1113 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: 
QuantLinearNode__1112:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1114 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1113 [DequantizeLinear] inputs: [QuantLinearNode__1112:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1114 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_23/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_23/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1113 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_23/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_23/add [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_23/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_23/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_23/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_23/add for ONNX node: StatefulPartitionedCall/model/add_23/add [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_23/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_23/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_23/add [Add] outputs: [StatefulPartitionedCall/model/add_23/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_48/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_23/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_48/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_23/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_48/Relu for ONNX node: StatefulPartitionedCall/model/activation_48/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_48/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_48/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_48/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_48/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1136 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_48/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1130 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1136 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_48/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1130 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1136:0 for ONNX tensor: QuantLinearNode__1136:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1136 [QuantizeLinear] outputs: [QuantLinearNode__1136:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1137 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1136:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1130 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1137 [DequantizeLinear] inputs: [QuantLinearNode__1136:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1130 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1137 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd/ReadVariableOp__563 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_51/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd/ReadVariableOp__563 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp__564 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp_1__565 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp__566 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp_1__567 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp__564 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_49/ReadVariableOp_1__565 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp__566 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3/ReadVariableOp_1__567 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_49/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_49/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_49/Relu for ONNX node: StatefulPartitionedCall/model/activation_49/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_49/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_49/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_49/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_49/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] 
[V] [TRT] Parsing node: QuantLinearNode__1144 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_49/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1142 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1144 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_49/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1142 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1144:0 for ONNX tensor: QuantLinearNode__1144:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1144 [QuantizeLinear] outputs: [QuantLinearNode__1144:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1145 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1144:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1142 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1145 [DequantizeLinear] inputs: [QuantLinearNode__1144:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1142 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1145 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd/ReadVariableOp__573 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_52/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd/ReadVariableOp__573 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp__574 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp_1__575 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp__576 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp_1__577 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp__574 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_50/ReadVariableOp_1__575 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp__576 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3/ReadVariableOp_1__577 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1132 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_48/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1130 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1132 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_48/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1130 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1132:0 for ONNX tensor: QuantLinearNode__1132:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1132 [QuantizeLinear] outputs: [QuantLinearNode__1132:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1133 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: 
QuantLinearNode__1132:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1130 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1133 [DequantizeLinear] inputs: [QuantLinearNode__1132:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1130 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_24/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_24/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1133 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_24/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_24/add [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_24/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_24/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_24/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_24/add for ONNX node: StatefulPartitionedCall/model/add_24/add [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_24/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_24/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_24/add [Add] outputs: [StatefulPartitionedCall/model/add_24/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_50/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_24/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_50/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_24/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_50/Relu for ONNX node: StatefulPartitionedCall/model/activation_50/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_50/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_50/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_50/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_50/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1156 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_50/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1154 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1156 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_50/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1154 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1156:0 for ONNX tensor: QuantLinearNode__1156:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1156 [QuantizeLinear] outputs: [QuantLinearNode__1156:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1157 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1156:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1154 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1157 [DequantizeLinear] inputs: [QuantLinearNode__1156:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1154 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1157 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd/ReadVariableOp__585 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_53/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd/ReadVariableOp__585 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp__586 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp_1__587 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp__588 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp_1__589 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp__586 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_51/ReadVariableOp_1__587 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp__588 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3/ReadVariableOp_1__589 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_51/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_51/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_51/Relu for ONNX node: StatefulPartitionedCall/model/activation_51/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_51/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_51/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_51/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_51/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] 
[V] [TRT] Parsing node: QuantLinearNode__1164 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_51/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1162 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1164 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_51/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1162 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1164:0 for ONNX tensor: QuantLinearNode__1164:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1164 [QuantizeLinear] outputs: [QuantLinearNode__1164:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1165 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1164:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1162 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1165 [DequantizeLinear] inputs: [QuantLinearNode__1164:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1162 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1165 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd/ReadVariableOp__595 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_54/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd/ReadVariableOp__595 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp__596 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp_1__597 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp__598 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp_1__599 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp__596 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_52/ReadVariableOp_1__597 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp__598 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3/ReadVariableOp_1__599 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1152 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_50/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1154 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1152 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_50/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1154 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1152:0 for ONNX tensor: QuantLinearNode__1152:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1152 [QuantizeLinear] outputs: [QuantLinearNode__1152:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1153 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: 
QuantLinearNode__1152:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1154 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1153 [DequantizeLinear] inputs: [QuantLinearNode__1152:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1154 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_25/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_25/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1153 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_25/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_25/add [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_25/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_25/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_25/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_25/add for ONNX node: StatefulPartitionedCall/model/add_25/add [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_25/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_25/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_25/add [Add] outputs: [StatefulPartitionedCall/model/add_25/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_52/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_25/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_52/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_25/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_52/Relu for ONNX node: StatefulPartitionedCall/model/activation_52/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_52/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_52/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_52/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_52/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1176 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_52/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1174 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1176 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_52/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1174 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1176:0 for ONNX tensor: QuantLinearNode__1176:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1176 [QuantizeLinear] outputs: [QuantLinearNode__1176:0 -> (-1, 64, 8, 8)[FLOAT]], 
[10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1177 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1176:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1174 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1177 [DequantizeLinear] inputs: [QuantLinearNode__1176:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1174 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1177 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd/ReadVariableOp__607 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_55/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd/ReadVariableOp__607 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp__608 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp_1__609 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp__610 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp_1__611 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp__608 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_53/ReadVariableOp_1__609 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp__610 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3/ReadVariableOp_1__611 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_53/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_53/Relu [Relu] inputs: [StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_53/Relu for ONNX node: StatefulPartitionedCall/model/activation_53/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_53/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_53/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_53/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_53/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] 
[V] [TRT] Parsing node: QuantLinearNode__1184 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_53/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1182 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1184 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_53/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1182 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1184:0 for ONNX tensor: QuantLinearNode__1184:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1184 [QuantizeLinear] outputs: [QuantLinearNode__1184:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1185 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: QuantLinearNode__1184:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1182 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1185 [DequantizeLinear] inputs: [QuantLinearNode__1184:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1182 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1185 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [Conv] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize_1:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd/ReadVariableOp__617 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [Conv] inputs: [StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_56/quantize_and_dequantize_1:0 -> (64, 64, 3, 3)[FLOAT]], [StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd/ReadVariableOp__617 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [10/04/2021-21:34:37] [V] [TRT] Kernel weights are not set yet. Kernel weights must be set using setInput(1, kernel_tensor) API call. 
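The "Kernel weights are not set yet" messages are expected for an explicitly quantized model: the convolution kernel is not a plain weight blob but the output of its own Q/DQ pair (the quantize_and_dequantize_1:0 input), so the parser creates the convolution with empty static weights and attaches the dequantized kernel as input index 1, exactly as the setInput(1, kernel_tensor) note says. A minimal sketch of that wiring with the TensorRT 8.x Python API (the tensor shapes match the log; everything else is illustrative):

    # Sketch: a convolution whose kernel comes from another layer's output
    # (e.g. a DequantizeLinear) is built with empty weights plus set_input(1, ...).
    # Assumes the TensorRT 8.x Python API.
    import numpy as np
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

    x = network.add_input("x", trt.float32, (-1, 64, 8, 8))
    # Stand-in for the dequantized kernel tensor produced by the DQ node.
    kernel = network.add_constant(
        (64, 64, 3, 3),
        trt.Weights(np.zeros((64, 64, 3, 3), dtype=np.float32))).get_output(0)

    conv = network.add_convolution_nd(x, 64, (3, 3), trt.Weights())  # no static kernel
    conv.padding_nd = (1, 1)
    conv.set_input(1, kernel)        # kernel supplied as a tensor, as the log describes
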
[10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd for ONNX node: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [Conv] outputs: [StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 [BatchNormalization] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp__618 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp_1__619 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp__620 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp_1__621 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp__618 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_54/ReadVariableOp_1__619 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp__620 -> (64)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3/ReadVariableOp_1__621 -> (64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: QuantLinearNode__1172 [QuantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_52/Relu:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1174 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1172 [QuantizeLinear] inputs: [StatefulPartitionedCall/model/activation_52/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1174 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: QuantLinearNode__1172:0 for ONNX tensor: QuantLinearNode__1172:0 [10/04/2021-21:34:37] [V] [TRT] QuantLinearNode__1172 [QuantizeLinear] outputs: [QuantLinearNode__1172:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: DequantLinearNode__1173 [DequantizeLinear] [10/04/2021-21:34:37] [V] [TRT] Searching for input: 
QuantLinearNode__1172:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: quant_scale__1174 [10/04/2021-21:34:37] [V] [TRT] Searching for input: zero_point__1163 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1173 [DequantizeLinear] inputs: [QuantLinearNode__1172:0 -> (-1, 64, 8, 8)[FLOAT]], [quant_scale__1174 -> ()[FLOAT]], [zero_point__1163 -> ()[INT8]], [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0 for ONNX tensor: StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] DequantLinearNode__1173 [DequantizeLinear] outputs: [StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/add_26/add [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_26/add [Add] inputs: [StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/add_26/add for ONNX node: StatefulPartitionedCall/model/add_26/add [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/add_26/add:0 for ONNX tensor: StatefulPartitionedCall/model/add_26/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/add_26/add [Add] outputs: [StatefulPartitionedCall/model/add_26/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/activation_54/Relu [Relu] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/add_26/add:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_54/Relu [Relu] inputs: [StatefulPartitionedCall/model/add_26/add:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/activation_54/Relu for ONNX node: StatefulPartitionedCall/model/activation_54/Relu [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/activation_54/Relu:0 for ONNX tensor: StatefulPartitionedCall/model/activation_54/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/activation_54/Relu [Relu] outputs: [StatefulPartitionedCall/model/activation_54/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/global_average_pooling2d/Mean [GlobalAveragePool] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/activation_54/Relu:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/global_average_pooling2d/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model/activation_54/Relu:0 -> (-1, 64, 8, 8)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/global_average_pooling2d/Mean for ONNX node: StatefulPartitionedCall/model/global_average_pooling2d/Mean [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/global_average_pooling2d/Mean:0 for ONNX tensor: StatefulPartitionedCall/model/global_average_pooling2d/Mean:0 
[10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/global_average_pooling2d/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model/global_average_pooling2d/Mean:0 -> (-1, 64, 1, 1)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 [Squeeze] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/global_average_pooling2d/Mean:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: const_axes__2019 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 [Squeeze] inputs: [StatefulPartitionedCall/model/global_average_pooling2d/Mean:0 -> (-1, 64, 1, 1)[FLOAT]], [const_axes__2019 -> (2)[INT32]], [10/04/2021-21:34:37] [V] [TRT] Original shape: (_, 64, 1, 1), squeezing to: (_, _) [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 for ONNX node: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020:0 for ONNX tensor: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 [Squeeze] outputs: [StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020:0 -> (-1, 64)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/dense/MatMul [MatMul] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/dense/MatMul/ReadVariableOp__622 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul [MatMul] inputs: [StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020:0 -> (-1, 64)[FLOAT]], [StatefulPartitionedCall/model/dense/MatMul/ReadVariableOp__622 -> (64, 10)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/dense/MatMul/ReadVariableOp__622 for ONNX node: StatefulPartitionedCall/model/dense/MatMul/ReadVariableOp__622 [10/04/2021-21:34:37] [V] [TRT] GEMM: using FC layer instead of MM because all criteria were met. 
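Here the parser decides to lower the final MatMul as a FullyConnected layer ("GEMM: using FC layer instead of MM because all criteria were met"), which is why, just below, the (-1, 64) activation is unsqueezed to four dimensions and the (-1, 10, 1, 1) result squeezed back to (-1, 10). The two formulations are numerically identical; a quick numpy check with the model's 64-in / 10-out dense shapes:

    # MatMul on (N, 64) x (64, 10) versus a fully connected layer applied to the
    # unsqueezed (N, 64, 1, 1) tensor -- identical results, which is what allows
    # the parser to swap MM for FC here.
    import numpy as np

    n = 4
    x = np.random.randn(n, 64).astype(np.float32)
    w = np.random.randn(64, 10).astype(np.float32)

    y_matmul = x @ w                                   # ONNX MatMul
    x4 = x.reshape(n, 64, 1, 1)                        # "unsqueezing to (_, _, _, _)"
    y_fc = np.einsum("nchw,co->no", x4, w)             # FC == 1x1 reduction over (N, C, 1, 1)

    print(np.allclose(y_matmul, y_fc, atol=1e-5))      # True
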
[10/04/2021-21:34:37] [V] [TRT] Original shape: (_, 64), unsqueezing to: (_, _, _, _) [10/04/2021-21:34:37] [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/dense/MatMul for ONNX node: StatefulPartitionedCall/model/dense/MatMul [10/04/2021-21:34:37] [V] [TRT] Original shape: (_, 10, 1, 1), squeezing to: (_, _) [10/04/2021-21:34:37] [V] [TRT] Registering tensor: StatefulPartitionedCall/model/dense/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model/dense/MatMul:0 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul [MatMul] outputs: [StatefulPartitionedCall/model/dense/MatMul:0 -> (-1, 10)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Parsing node: StatefulPartitionedCall/model/dense/BiasAdd [Add] [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/dense/MatMul:0 [10/04/2021-21:34:37] [V] [TRT] Searching for input: StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model/dense/MatMul:0 -> (-1, 10)[FLOAT]], [StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 -> (10)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 for ONNX node: StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 [10/04/2021-21:34:37] [V] [TRT] Registering layer: StatefulPartitionedCall/model/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Registering tensor: dense_0 for ONNX tensor: dense [10/04/2021-21:34:37] [V] [TRT] StatefulPartitionedCall/model/dense/BiasAdd [Add] outputs: [dense -> (-1, 10)[FLOAT]], [10/04/2021-21:34:37] [V] [TRT] Marking dense_0 as output: dense [10/04/2021-21:34:37] [I] Finish parsing network model [10/04/2021-21:34:37] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 9224, GPU 977 (MiB) [10/04/2021-21:34:37] [10/04/2021-21:34:37] [I] [TRT] [MemUsageSnapshot] Builder begin: CPU 9224 MiB, GPU 979 MiB [10/04/2021-21:34:37] [10/04/2021-21:34:37] [10/04/2021-21:34:37] [V] [TRT] Applying generic optimizations to the graph for inference. 
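With "Finish parsing network model" the ONNX import is complete and the builder takes over. For reference, the same parse-and-build flow can be driven directly from the TensorRT 8.x Python API; the sketch below is illustrative (the input name, profile shapes and workspace size are placeholders, not taken from this model), and because the graph already carries Q/DQ nodes, the INT8 scales come from the model itself rather than from a calibrator:

    # Rough Python equivalent of the ONNX parse + engine build this log records
    # (TensorRT 8.x API; placeholder values marked below).
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("resnet.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit("ONNX parse failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30        # workspace budget (illustrative)
    config.set_flag(trt.BuilderFlag.INT8)      # Q/DQ nodes supply the scales, no calibrator
    config.set_flag(trt.BuilderFlag.FP16)      # optional; lets unquantized layers use FP16

    # The network input has a dynamic batch dimension (-1), so a profile is required.
    # "input_1" and the shapes are placeholders, not read from this model.
    profile = builder.create_optimization_profile()
    profile.set_shape("input_1", (1, 32, 32, 3), (8, 32, 32, 3), (32, 32, 32, 3))
    config.add_optimization_profile(profile)

    engine_bytes = builder.build_serialized_network(network, config)
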
[10/04/2021-21:34:37] [V] [TRT] Original: 1105 layers [10/04/2021-21:34:37] [V] [TRT] After dead-layer removal: 1105 layers [10/04/2021-21:34:37] [V] [TRT] QDQ graph optimizer - constant folding of Q/DQ initializers [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 2) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 10) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 9) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 18) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 17) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 26) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 25) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 34) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 33) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 42) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 41) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 50) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 49) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 58) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 57) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 66) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 65) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 74) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 73) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 82) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 81) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 90) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 89) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 98) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 97) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 106) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 105) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 114) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 113) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 122) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 121) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 130) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 129) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 138) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 137) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 146) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 145) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 153) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 152) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 161) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 160) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 169) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 168) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 177) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 176) [Constant] 
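The long run of "Removing (Unnamed Layer* N) [Constant]" entries that follows is the constant-folding step announced just above: the scale and zero-point initializers of each Q/DQ node were imported as unnamed Constant layers, and once their values are folded into the quantize/dequantize layers the constants are dead, which is why they disappear in pairs. Those layers can be inspected before building; a short sketch, assuming `network` is the INetworkDefinition populated by the ONNX parser as in the snippet above:

    # List the Constant layers the parser created (these include the Q/DQ scale and
    # zero-point initializers that the optimizer is removing in this part of the log).
    import tensorrt as trt

    constant_layers = [network.get_layer(i) for i in range(network.num_layers)
                       if network.get_layer(i).type == trt.LayerType.CONSTANT]
    print(f"{network.num_layers} layers total, {len(constant_layers)} of them Constant")
    for layer in constant_layers[:5]:                  # peek at a few
        print(layer.name, layer.get_output(0).shape)
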
[10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 185) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 184) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 193) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 192) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 201) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 200) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 209) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 208) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 217) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 216) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 225) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 224) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 232) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 231) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 238) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 237) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 244) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 243) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 250) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 249) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 256) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 255) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 262) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 261) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 268) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 267) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 274) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 273) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 280) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 279) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 286) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 285) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 292) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 291) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 298) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 297) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 304) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 303) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 310) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 309) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 316) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 315) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 322) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 321) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 328) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 327) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 334) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 333) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed 
Layer* 340) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 339) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 802) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 801) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 808) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 807) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 814) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 813) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 820) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 819) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 826) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 825) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 832) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 831) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 838) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 837) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 844) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 843) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 850) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 849) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 856) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 855) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 349) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 348) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 352) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 351) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 358) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 357) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 374) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 373) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 377) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 376) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 383) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 382) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 399) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 398) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 402) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 401) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 408) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 407) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 424) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 423) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 427) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 426) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 433) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 432) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 449) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 448) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 452) [Constant] [10/04/2021-21:34:37] [V] 
[TRT] Removing (Unnamed Layer* 451) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 458) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 457) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 474) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 473) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 477) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 476) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 483) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 482) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 499) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 498) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 502) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 501) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 508) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 507) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 524) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 523) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 527) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 526) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 533) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 532) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 549) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 548) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 552) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 551) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 558) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 557) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 574) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 573) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 577) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 576) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 598) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 597) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 601) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 600) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 586) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 585) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 606) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 605) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 609) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 608) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 615) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 614) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 631) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 630) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 634) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 633) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 640) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 639) [Constant] 
[10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 656) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 655) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 659) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 658) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 665) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 664) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 681) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 680) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 684) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 683) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 690) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 689) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 706) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 705) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 709) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 708) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 715) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 714) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 731) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 730) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 734) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 733) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 740) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 739) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 756) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 755) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 759) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 758) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 765) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 764) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 781) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 780) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 784) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 783) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 790) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 789) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 866) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 865) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 869) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 868) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 890) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 889) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 893) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 892) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 878) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 877) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 898) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 897) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed 
Layer* 901) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 900) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 907) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 906) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 923) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 922) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 926) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 925) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 932) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 931) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 948) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 947) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 951) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 950) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 957) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 956) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 973) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 972) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 976) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 975) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 982) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 981) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 998) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 997) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1001) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1000) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1007) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1006) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1023) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1022) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1026) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1025) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1032) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1031) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1048) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1047) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1051) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1050) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1057) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1056) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1073) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1072) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1076) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1075) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1082) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1081) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 6) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 5) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 14) [Constant] 
[10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 13) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 22) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 21) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 30) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 29) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 38) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 37) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 46) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 45) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 54) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 53) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 62) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 61) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 70) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 69) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 78) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 77) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 86) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 85) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 94) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 93) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 102) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 101) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 110) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 109) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 118) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 117) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 126) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 125) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 134) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 133) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 142) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 141) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 149) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 148) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 157) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 156) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 165) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 164) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 173) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 172) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 181) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 180) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 189) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 188) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 197) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 196) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 205) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 204) 
[Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 213) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 212) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 221) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 220) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 229) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 228) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 235) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 234) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 241) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 240) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 247) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 246) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 253) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 252) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 259) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 258) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 265) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 264) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 271) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 270) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 277) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 276) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 283) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 282) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 289) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 288) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 295) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 294) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 301) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 300) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 307) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 306) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 313) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 312) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 319) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 318) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 325) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 324) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 331) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 330) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 337) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 336) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 343) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 342) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 805) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 804) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 811) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 810) [Constant] [10/04/2021-21:34:37] [V] [TRT] 
Removing (Unnamed Layer* 817) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 816) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 823) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 822) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 829) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 828) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 835) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 834) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 841) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 840) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 847) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 846) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 853) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 852) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 859) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 858) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 366) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 365) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 369) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 368) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 361) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 360) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 391) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 390) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 394) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 393) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 386) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 385) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 416) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 415) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 419) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 418) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 411) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 410) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 441) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 440) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 444) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 443) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 436) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 435) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 466) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 465) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 469) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 468) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 461) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 460) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 491) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 490) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 494) [Constant] 
[10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 493) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 486) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 485) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 516) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 515) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 519) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 518) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 511) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 510) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 541) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 540) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 544) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 543) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 536) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 535) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 566) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 565) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 569) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 568) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 561) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 560) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 591) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 590) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 594) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 593) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 583) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 582) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 623) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 622) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 626) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 625) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 618) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 617) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 648) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 647) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 651) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 650) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 643) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 642) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 673) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 672) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 676) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 675) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 668) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 667) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 698) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 697) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 701) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed 
Layer* 700) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 693) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 692) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 723) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 722) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 726) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 725) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 718) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 717) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 748) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 747) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 751) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 750) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 743) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 742) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 773) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 772) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 776) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 775) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 768) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 767) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 796) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 795) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 799) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 798) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 793) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 792) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 883) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 882) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 886) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 885) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 875) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 874) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 915) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 914) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 918) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 917) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 910) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 909) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 940) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 939) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 943) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 942) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 935) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 934) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 965) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 964) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 968) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 967) [Constant] [10/04/2021-21:34:37] [V] 
[TRT] Removing (Unnamed Layer* 960) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 959) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 990) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 989) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 993) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 992) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 985) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 984) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1015) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1014) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1018) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1017) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1010) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1009) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1040) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1039) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1043) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1042) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1035) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1034) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1065) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1064) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1068) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1067) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1060) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1059) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1090) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1089) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1093) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1092) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1085) [Constant] [10/04/2021-21:34:37] [V] [TRT] Removing (Unnamed Layer* 1084) [Constant] [10/04/2021-21:34:37] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 with (Unnamed Layer* 1108) [Shuffle] [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/global_average_pooling2d/Mean_Squeeze__2020 + (Unnamed Layer* 1108) [Shuffle] [10/04/2021-21:34:37] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 with (Unnamed Layer* 1115) [Shuffle] [10/04/2021-21:34:37] [V] [TRT] After Myelin optimization: 538 layers [10/04/2021-21:34:37] [V] [TRT] Convert layer type of StatefulPartitionedCall/model/dense/MatMul from FULLY_CONNECTED to CONVOLUTION [10/04/2021-21:34:37] [V] [TRT] Removing shuffle_between_StatefulPartitionedCall/model/global_average_pooling2d/Mean:0_and_StatefulPartitionedCall/model/dense/MatMul [10/04/2021-21:34:37] [V] [TRT] After scale fusion: 538 layers [10/04/2021-21:34:37] [V] [TRT] QDQ graph optimizer - constant folding of Q/DQ initializers [10/04/2021-21:34:37] [V] [TRT] QDQ graph optimizer forward pass - DQ motions and fusions [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add/add with 
StatefulPartitionedCall/model/activation_2/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_1/add with StatefulPartitionedCall/model/activation_4/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_2/add with StatefulPartitionedCall/model/activation_6/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_3/add with StatefulPartitionedCall/model/activation_8/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_4/add with StatefulPartitionedCall/model/activation_10/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_5/add with StatefulPartitionedCall/model/activation_12/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_6/add with StatefulPartitionedCall/model/activation_14/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_7/add with StatefulPartitionedCall/model/activation_16/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_8/add with StatefulPartitionedCall/model/activation_18/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_9/add with StatefulPartitionedCall/model/activation_20/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_10/add with StatefulPartitionedCall/model/activation_22/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_11/add with StatefulPartitionedCall/model/activation_24/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_12/add with StatefulPartitionedCall/model/activation_26/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_13/add with StatefulPartitionedCall/model/activation_28/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_14/add with StatefulPartitionedCall/model/activation_30/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_15/add with StatefulPartitionedCall/model/activation_32/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_16/add with StatefulPartitionedCall/model/activation_34/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_17/add with StatefulPartitionedCall/model/activation_36/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_18/add with StatefulPartitionedCall/model/activation_38/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_19/add with StatefulPartitionedCall/model/activation_40/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_20/add with StatefulPartitionedCall/model/activation_42/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_21/add with StatefulPartitionedCall/model/activation_44/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_22/add with StatefulPartitionedCall/model/activation_46/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_23/add with StatefulPartitionedCall/model/activation_48/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing 
StatefulPartitionedCall/model/add_24/add with StatefulPartitionedCall/model/activation_50/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_25/add with StatefulPartitionedCall/model/activation_52/Relu [10/04/2021-21:34:37] [V] [TRT] EltReluFusion: Fusing StatefulPartitionedCall/model/add_26/add with StatefulPartitionedCall/model/activation_54/Relu [10/04/2021-21:34:37] [V] [TRT] Swap the layer type of StatefulPartitionedCall/model/global_average_pooling2d/Mean from REDUCE to POOLING [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 with QuantLinearNode__992_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 with QuantLinearNode__980_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 with QuantLinearNode__972_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 with QuantLinearNode__960_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 with QuantLinearNode__952_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 with QuantLinearNode__940_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 with QuantLinearNode__932_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 with QuantLinearNode__920_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 with QuantLinearNode__912_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 with QuantLinearNode__900_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 with QuantLinearNode__892_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 with QuantLinearNode__880_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 with QuantLinearNode__872_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 with QuantLinearNode__860_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 with QuantLinearNode__852_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 with QuantLinearNode__840_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 with QuantLinearNode__832_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: 
Fusing StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 with QuantLinearNode__820_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 with QuantLinearNode__812_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 with QuantLinearNode__804_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 with QuantLinearNode__792_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 with QuantLinearNode__784_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 with QuantLinearNode__772_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 with QuantLinearNode__764_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 with QuantLinearNode__752_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 with QuantLinearNode__744_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 with QuantLinearNode__732_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 with QuantLinearNode__724_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 with QuantLinearNode__712_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 with QuantLinearNode__704_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 with QuantLinearNode__692_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 with QuantLinearNode__684_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 with QuantLinearNode__672_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 with QuantLinearNode__664_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 with QuantLinearNode__652_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 with QuantLinearNode__644_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d/transpose__8 with QuantLinearNode__632_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 with 
QuantLinearNode__1188_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 with QuantLinearNode__1180_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 with QuantLinearNode__1168_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 with QuantLinearNode__1160_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 with QuantLinearNode__1148_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 with QuantLinearNode__1140_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 with QuantLinearNode__1128_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 with QuantLinearNode__1120_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 with QuantLinearNode__1108_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 with QuantLinearNode__1100_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 with QuantLinearNode__1088_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 with QuantLinearNode__1080_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 with QuantLinearNode__1068_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 with QuantLinearNode__1060_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 with QuantLinearNode__1048_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 with QuantLinearNode__1040_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 with QuantLinearNode__1028_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 with QuantLinearNode__1020_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 with QuantLinearNode__1008_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsQuantizeFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 with QuantLinearNode__1000_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_1/Relu with QuantLinearNode__648_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping 
StatefulPartitionedCall/model/activation_3/Relu with QuantLinearNode__668_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_5/Relu with QuantLinearNode__688_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_7/Relu with QuantLinearNode__708_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_9/Relu with QuantLinearNode__728_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_11/Relu with QuantLinearNode__748_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_13/Relu with QuantLinearNode__768_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_15/Relu with QuantLinearNode__788_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_17/Relu with QuantLinearNode__808_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_19/Relu with QuantLinearNode__836_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_21/Relu with QuantLinearNode__856_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_23/Relu with QuantLinearNode__876_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_25/Relu with QuantLinearNode__896_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_27/Relu with QuantLinearNode__916_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_29/Relu with QuantLinearNode__936_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_31/Relu with QuantLinearNode__956_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_33/Relu with QuantLinearNode__976_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_35/Relu with QuantLinearNode__996_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_37/Relu with QuantLinearNode__1024_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_39/Relu with QuantLinearNode__1044_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_41/Relu with QuantLinearNode__1064_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_43/Relu with QuantLinearNode__1084_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_45/Relu with QuantLinearNode__1104_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_47/Relu with QuantLinearNode__1124_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_49/Relu with QuantLinearNode__1144_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_51/Relu with QuantLinearNode__1164_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation_53/Relu with QuantLinearNode__1184_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating 
QuantLinearNode__636_quantize_scale_node which duplicates (Q) QuantLinearNode__640_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__636_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__656_quantize_scale_node which duplicates (Q) QuantLinearNode__660_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__656_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__676_quantize_scale_node which duplicates (Q) QuantLinearNode__680_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__676_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__696_quantize_scale_node which duplicates (Q) QuantLinearNode__700_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__696_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__716_quantize_scale_node which duplicates (Q) QuantLinearNode__720_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__716_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__736_quantize_scale_node which duplicates (Q) QuantLinearNode__740_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__736_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__756_quantize_scale_node which duplicates (Q) QuantLinearNode__760_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__756_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__776_quantize_scale_node which duplicates (Q) QuantLinearNode__780_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__776_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__796_quantize_scale_node which duplicates (Q) QuantLinearNode__800_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__796_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__816_quantize_scale_node which duplicates (Q) QuantLinearNode__828_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__816_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__844_quantize_scale_node which duplicates (Q) QuantLinearNode__848_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__844_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__864_quantize_scale_node which duplicates (Q) QuantLinearNode__868_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__864_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__884_quantize_scale_node which duplicates (Q) QuantLinearNode__888_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__884_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__904_quantize_scale_node which duplicates (Q) QuantLinearNode__908_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__904_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__924_quantize_scale_node which duplicates (Q) QuantLinearNode__928_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__924_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__944_quantize_scale_node which duplicates (Q) QuantLinearNode__948_quantize_scale_node 
[10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__944_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__964_quantize_scale_node which duplicates (Q) QuantLinearNode__968_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__964_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__984_quantize_scale_node which duplicates (Q) QuantLinearNode__988_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__984_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__1004_quantize_scale_node which duplicates (Q) QuantLinearNode__1016_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1004_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__1032_quantize_scale_node which duplicates (Q) QuantLinearNode__1036_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1032_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__1052_quantize_scale_node which duplicates (Q) QuantLinearNode__1056_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1052_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__1072_quantize_scale_node which duplicates (Q) QuantLinearNode__1076_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1072_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__1092_quantize_scale_node which duplicates (Q) QuantLinearNode__1096_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1092_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__1112_quantize_scale_node which duplicates (Q) QuantLinearNode__1116_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1112_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__1132_quantize_scale_node which duplicates (Q) QuantLinearNode__1136_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1132_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__1152_quantize_scale_node which duplicates (Q) QuantLinearNode__1156_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1152_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Eliminating QuantLinearNode__1172_quantize_scale_node which duplicates (Q) QuantLinearNode__1176_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1172_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/activation/Relu with QuantLinearNode__640_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QDQ graph optimizer quantization pass - Generate quantized ops [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_1/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_2/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_3/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_4/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_5/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing 
StatefulPartitionedCall/model/batch_normalization_6/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_7/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_8/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_9/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_10/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_11/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_12/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_13/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_14/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_15/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_16/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_17/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_18/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_19/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_20/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_21/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_22/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_23/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_24/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_25/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_26/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_27/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_28/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_29/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_30/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_31/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_32/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_33/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_34/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_35/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_36/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_37/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing 
StatefulPartitionedCall/model/batch_normalization_38/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_39/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_40/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_41/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_42/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_43/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_44/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_45/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_46/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_47/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_48/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_49/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_50/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_51/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_52/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_53/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Removing StatefulPartitionedCall/model/batch_normalization_54/FusedBatchNormV3 [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu with QuantLinearNode__660_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__660_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__649_quantize_scale_node and DequantLinearNode__653_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__660_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__649_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__653_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd with StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__637_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu with QuantLinearNode__680_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__680_quantize_scale_node into 
StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__669_quantize_scale_node and DequantLinearNode__673_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__680_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__669_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__673_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd with StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__657_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu with QuantLinearNode__700_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__700_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__689_quantize_scale_node and DequantLinearNode__693_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__700_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__689_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__693_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + QuantLinearNode__692_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + QuantLinearNode__692_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd with StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__677_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu with QuantLinearNode__720_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__720_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__709_quantize_scale_node and DequantLinearNode__713_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__720_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__709_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__713_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node with 
StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd with StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__697_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu with QuantLinearNode__740_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__740_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__729_quantize_scale_node and DequantLinearNode__733_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__740_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__729_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__733_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd with StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__717_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu with QuantLinearNode__760_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__760_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__749_quantize_scale_node and DequantLinearNode__753_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__760_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__749_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__753_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + QuantLinearNode__752_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + QuantLinearNode__752_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd with StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__737_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu with QuantLinearNode__780_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: 
fusing QuantLinearNode__780_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__769_quantize_scale_node and DequantLinearNode__773_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__780_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__769_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__773_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + QuantLinearNode__772_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + QuantLinearNode__772_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd with StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__757_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu with QuantLinearNode__800_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__800_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__789_quantize_scale_node and DequantLinearNode__793_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__800_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__789_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__793_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd with StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__777_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu with QuantLinearNode__828_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__828_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__809_quantize_scale_node and DequantLinearNode__813_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__828_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__809_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__813_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing 
StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd with StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__797_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu with QuantLinearNode__848_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__848_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__837_quantize_scale_node and DequantLinearNode__841_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__848_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__837_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__841_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd with StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__825_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu with QuantLinearNode__868_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__868_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__857_quantize_scale_node and DequantLinearNode__861_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__868_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__857_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__861_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd with StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__845_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_11/add + 
StatefulPartitionedCall/model/activation_24/Relu with QuantLinearNode__888_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__888_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__877_quantize_scale_node and DequantLinearNode__881_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__888_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__877_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__881_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd with StatefulPartitionedCall/model/add_11/add + StatefulPartitionedCall/model/activation_24/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__865_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu with QuantLinearNode__908_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__908_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__897_quantize_scale_node and DequantLinearNode__901_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__908_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__897_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__901_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd with StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__885_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu with QuantLinearNode__928_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__928_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__917_quantize_scale_node and DequantLinearNode__921_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__928_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__917_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing 
DequantLinearNode__921_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd with StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__905_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu with QuantLinearNode__948_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__948_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__937_quantize_scale_node and DequantLinearNode__941_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__948_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__937_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__941_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd with StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__925_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu with QuantLinearNode__968_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__968_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__957_quantize_scale_node and DequantLinearNode__961_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__968_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__957_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__961_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd with StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__945_quantize_scale_node 
[10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_16/add + StatefulPartitionedCall/model/activation_34/Relu with QuantLinearNode__988_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__988_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__977_quantize_scale_node and DequantLinearNode__981_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__988_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__977_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__981_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd with StatefulPartitionedCall/model/add_16/add + StatefulPartitionedCall/model/activation_34/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__965_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu with QuantLinearNode__1016_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1016_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__997_quantize_scale_node and DequantLinearNode__1001_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1016_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__997_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1001_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd with StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__985_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu with QuantLinearNode__1036_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1036_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1025_quantize_scale_node and DequantLinearNode__1029_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1036_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] 
Removing DequantLinearNode__1025_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1029_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd with StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1013_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu with QuantLinearNode__1056_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1056_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1045_quantize_scale_node and DequantLinearNode__1049_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1056_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1045_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1049_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd with StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1033_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_20/add + StatefulPartitionedCall/model/activation_42/Relu with QuantLinearNode__1076_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1076_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1065_quantize_scale_node and DequantLinearNode__1069_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1076_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1065_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1069_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + QuantLinearNode__1068_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + QuantLinearNode__1068_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd with StatefulPartitionedCall/model/add_20/add + 
StatefulPartitionedCall/model/activation_42/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1053_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu with QuantLinearNode__1096_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1096_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1085_quantize_scale_node and DequantLinearNode__1089_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1096_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1085_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1089_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd with StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1073_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu with QuantLinearNode__1116_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1116_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1105_quantize_scale_node and DequantLinearNode__1109_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1116_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1105_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1109_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd with StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1093_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu with QuantLinearNode__1136_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1136_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1125_quantize_scale_node and DequantLinearNode__1129_quantize_scale_node) into 
StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1136_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1125_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1129_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd with StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1113_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu with QuantLinearNode__1156_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1156_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1145_quantize_scale_node and DequantLinearNode__1149_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1156_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1145_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1149_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd with StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1133_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Swapping StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu with QuantLinearNode__1176_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1176_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1165_quantize_scale_node and DequantLinearNode__1169_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1176_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1165_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1169_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + QuantLinearNode__1168_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + 
QuantLinearNode__1168_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd with StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1153_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__640_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__629_quantize_scale_node and DequantLinearNode__633_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__640_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__629_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__633_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__648_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__641_quantize_scale_node and DequantLinearNode__645_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__648_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__641_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__645_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__668_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__661_quantize_scale_node and DequantLinearNode__665_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__668_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__661_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__665_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__688_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__681_quantize_scale_node and DequantLinearNode__685_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__688_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__681_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__685_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__708_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__701_quantize_scale_node and DequantLinearNode__705_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__708_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__701_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__705_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__728_quantize_scale_node into 
StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__721_quantize_scale_node and DequantLinearNode__725_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__728_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__721_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__725_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__748_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__741_quantize_scale_node and DequantLinearNode__745_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__748_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__741_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__745_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__768_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__761_quantize_scale_node and DequantLinearNode__765_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__768_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__761_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__765_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__788_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__781_quantize_scale_node and DequantLinearNode__785_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__788_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__781_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__785_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__808_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__801_quantize_scale_node and DequantLinearNode__805_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__808_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__801_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__805_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__836_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__829_quantize_scale_node and DequantLinearNode__833_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__836_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing 
DequantLinearNode__829_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__833_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__856_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__849_quantize_scale_node and DequantLinearNode__853_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__856_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__849_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__853_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__876_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__869_quantize_scale_node and DequantLinearNode__873_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__876_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__869_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__873_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__896_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__889_quantize_scale_node and DequantLinearNode__893_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__896_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__889_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__893_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__916_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__909_quantize_scale_node and DequantLinearNode__913_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__916_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__909_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__913_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__936_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__929_quantize_scale_node and DequantLinearNode__933_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__936_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__929_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__933_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__956_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing 
(DequantLinearNode__949_quantize_scale_node and DequantLinearNode__953_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__956_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__949_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__953_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__976_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__969_quantize_scale_node and DequantLinearNode__973_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__976_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__969_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__973_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__996_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__989_quantize_scale_node and DequantLinearNode__993_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__996_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__989_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__993_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1024_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1017_quantize_scale_node and DequantLinearNode__1021_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1024_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1017_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1021_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1044_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1037_quantize_scale_node and DequantLinearNode__1041_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1044_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1037_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1041_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1064_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1057_quantize_scale_node and DequantLinearNode__1061_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1064_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1057_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing 
DequantLinearNode__1061_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1084_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1077_quantize_scale_node and DequantLinearNode__1081_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1084_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1077_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1081_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1104_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1097_quantize_scale_node and DequantLinearNode__1101_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1104_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1097_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1101_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1124_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1117_quantize_scale_node and DequantLinearNode__1121_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1124_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1117_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1121_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1144_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1137_quantize_scale_node and DequantLinearNode__1141_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1144_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1137_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1141_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1164_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1157_quantize_scale_node and DequantLinearNode__1161_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1164_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1157_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1161_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1184_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1177_quantize_scale_node and DequantLinearNode__1181_quantize_scale_node) 
into StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1184_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1177_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1181_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1185_quantize_scale_node and DequantLinearNode__1189_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1185_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1189_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__824_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__817_quantize_scale_node and DequantLinearNode__821_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__824_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__817_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__821_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing QuantLinearNode__1012_quantize_scale_node into StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [10/04/2021-21:34:37] [V] [TRT] QuantizeDoubleInputNodes: fusing (DequantLinearNode__1005_quantize_scale_node and DequantLinearNode__1009_quantize_scale_node) into StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [10/04/2021-21:34:37] [V] [TRT] Removing QuantLinearNode__1012_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1005_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] Removing DequantLinearNode__1009_quantize_scale_node [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 + QuantLinearNode__992_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 + QuantLinearNode__972_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 + QuantLinearNode__952_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 + QuantLinearNode__932_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 + QuantLinearNode__912_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 + QuantLinearNode__892_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 + QuantLinearNode__872_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd [10/04/2021-21:34:37] [V] [TRT] 
ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 + QuantLinearNode__804_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 + QuantLinearNode__784_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 + QuantLinearNode__764_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 + QuantLinearNode__744_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 + QuantLinearNode__724_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 + QuantLinearNode__704_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 + QuantLinearNode__684_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 + QuantLinearNode__664_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 + QuantLinearNode__1180_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 + QuantLinearNode__1160_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: 
Fusing StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 + QuantLinearNode__1140_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 + QuantLinearNode__1120_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 + QuantLinearNode__1100_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 + QuantLinearNode__1080_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 + QuantLinearNode__1060_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstWeightsFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node with StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd with StatefulPartitionedCall/model/activation/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd with StatefulPartitionedCall/model/activation_1/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 + QuantLinearNode__664_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd with StatefulPartitionedCall/model/activation_3/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 + QuantLinearNode__684_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd with StatefulPartitionedCall/model/activation_5/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 + QuantLinearNode__704_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd with StatefulPartitionedCall/model/activation_7/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 + QuantLinearNode__724_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd with StatefulPartitionedCall/model/activation_9/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 + QuantLinearNode__744_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd with 
StatefulPartitionedCall/model/activation_11/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 + QuantLinearNode__764_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd with StatefulPartitionedCall/model/activation_13/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 + QuantLinearNode__784_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd with StatefulPartitionedCall/model/activation_15/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 + QuantLinearNode__804_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd with StatefulPartitionedCall/model/activation_17/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd with StatefulPartitionedCall/model/activation_19/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd with StatefulPartitionedCall/model/activation_21/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 + QuantLinearNode__872_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd with StatefulPartitionedCall/model/activation_23/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 + QuantLinearNode__892_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd with StatefulPartitionedCall/model/activation_25/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 + QuantLinearNode__912_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd with StatefulPartitionedCall/model/activation_27/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 + QuantLinearNode__932_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd with StatefulPartitionedCall/model/activation_29/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 + QuantLinearNode__952_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd with StatefulPartitionedCall/model/activation_31/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 + QuantLinearNode__972_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd with StatefulPartitionedCall/model/activation_33/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 + QuantLinearNode__992_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd with StatefulPartitionedCall/model/activation_35/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd with StatefulPartitionedCall/model/activation_37/Relu 
[10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd with StatefulPartitionedCall/model/activation_39/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 + QuantLinearNode__1060_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd with StatefulPartitionedCall/model/activation_41/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 + QuantLinearNode__1080_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd with StatefulPartitionedCall/model/activation_43/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 + QuantLinearNode__1100_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd with StatefulPartitionedCall/model/activation_45/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 + QuantLinearNode__1120_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd with StatefulPartitionedCall/model/activation_47/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 + QuantLinearNode__1140_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd with StatefulPartitionedCall/model/activation_49/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 + QuantLinearNode__1160_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd with StatefulPartitionedCall/model/activation_51/Relu [10/04/2021-21:34:37] [V] [TRT] ConvReluFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 + QuantLinearNode__1180_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd with StatefulPartitionedCall/model/activation_53/Relu [10/04/2021-21:34:37] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd with StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu [10/04/2021-21:34:37] [V] [TRT] -----------SqueezePushDown kSQUEEZE_JOIN case: StatefulPartitionedCall/model/dense/MatMul --> (Unnamed Layer* 1113) [Shuffle] --> StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] with unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] [10/04/2021-21:34:37] [V] [TRT] ConstEltFusion: Fusing StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] with StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:37] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/model/dense/MatMul with StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + 
unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:37] [V] [TRT] After vertical fusions: 62 layers [10/04/2021-21:34:37] [V] [TRT] After dupe layer removal: 62 layers [10/04/2021-21:34:37] [V] [TRT] After final dead-layer removal: 62 layers [10/04/2021-21:34:37] [V] [TRT] After tensor merging: 62 layers [10/04/2021-21:34:37] [V] [TRT] After concat removal: 62 layers [10/04/2021-21:34:37] [V] [TRT] Graph construction and optimization completed in 0.389638 seconds. [10/04/2021-21:34:38] [V] [TRT] Using cublas a tactic source [10/04/2021-21:34:38] [10/04/2021-21:34:38] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +207, GPU +74, now: CPU 9417, GPU 1053 (MiB) [10/04/2021-21:34:38] [V] [TRT] Using cuDNN as a tactic source [10/04/2021-21:34:40] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +175, GPU +78, now: CPU 9592, GPU 1131 (MiB) [10/04/2021-21:34:40] [10/04/2021-21:34:40] [10/04/2021-21:34:40] [V] [TRT] Constructing optimization profile number 0 [1/1]. [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Float(3072,1024,32,1) -> Int8(3072,1024,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: QuantLinearNode__628_quantize_scale_node (Scale) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for QuantLinearNode__628_quantize_scale_node [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.01912 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.01912 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Float(3072,1024,32,1) -> Int8(1024,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: QuantLinearNode__628_quantize_scale_node (Scale) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for QuantLinearNode__628_quantize_scale_node [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.00864 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.00864 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Int8(3072,1024,32,1) -> Int8(1024,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 1002 Time: 0.007896 [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.004036 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.004036 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (FusedConvActConvolution) [10/04/2021-21:34:40] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.009984 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.011732 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.012312 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.014624 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.009204 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.010204 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -7665878956594041863 Time: 0.009204 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (FusedConvActConvolution) [10/04/2021-21:34:40] [V] [TRT] Tactic: 524287 Time: 0.022204 [10/04/2021-21:34:40] [V] [TRT] Tactic: 720895 Time: 0.015256 [10/04/2021-21:34:40] [V] [TRT] Tactic: 983039 Time: 0.014272 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1048575 Time: 0.016 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1703935 Time: 0.014772 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1966079 Time: 0.020112 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2031615 Time: 0.025888 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2228223 Time: 0.018372 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2752511 Time: 0.015832 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2818047 Time: 0.024016 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2883583 Time: 0.021872 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3014655 Time: 0.018044 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3145727 Time: 0.02308 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3473407 Time: 0.016276 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3604479 Time: 0.014708 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3735551 Time: 0.015184 [10/04/2021-21:34:40] [V] [TRT] Tactic: 4390911 Time: 0.027456 [10/04/2021-21:34:40] [V] [TRT] Tactic: 5046271 Time: 0.012188 [10/04/2021-21:34:40] [V] [TRT] Tactic: 5963775 Time: 0.02408 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6160383 Time: 0.017736 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6488063 Time: 0.018904 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6881279 Time: 0.017804 [10/04/2021-21:34:40] [V] [TRT] Tactic: 7995391 Time: 0.014268 [10/04/2021-21:34:40] [V] [TRT] Tactic: 8585215 Time: 0.023764 [10/04/2021-21:34:40] [V] [TRT] Tactic: 8978431 Time: 0.02232 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9043967 Time: 0.013328 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9175039 Time: 0.012308 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9502719 Time: 0.027128 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9830399 Time: 0.016604 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10027007 Time: 0.014716 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10092543 Time: 0.030696 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10289151 Time: 0.019072 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10485759 Time: 0.01002 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10813439 Time: 0.012276 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 10485759 Time: 0.01002 [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.011848 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + 
QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.018436 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.013312 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.015924 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.010076 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.011232 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -7665878956594041863 Time: 0.010076 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: FusedConvActConvolution Tactic: 10485759 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (FusedConvActConvolution) [10/04/2021-21:34:40] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu (CaskConvolution) 
[10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.01318 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.01664 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.013692 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.01884 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.011264 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.014192 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -7665878956594041863 Time: 0.011264 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** 
[10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1), Int8(4096,1024:4,32,1) -> Int8(4096,1024:4,32,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.011612 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.01306 
[10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.01324 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.016896 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.00992 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.01216 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -7665878956594041863 Time: 0.00992 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(4096,1024:4,32,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (FusedConvActConvolution) [10/04/2021-21:34:40] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.009416 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: 
maxwell_int8x4_icudnn_int8x4_128x128_relu_interior_nn_v1 Tactic: 251000705357475173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 251000705357475173 Time: 0.021864 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.018816 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.019152 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.014072 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.008552 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_interior_nn_v1 Tactic: -7075774695632718014 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7075774695632718014 Time: 0.019456 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_interior_nn_v1 Tactic: -4176343923564075021 [10/04/2021-21:34:40] [V] [TRT] Tactic: -4176343923564075021 Time: 0.014796 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.009216 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -7665878956594041863 Time: 0.008552 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping 
[10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu (FusedConvActConvolution) [10/04/2021-21:34:40] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.016 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.019496 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.01842 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.01752 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.013436 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.015716 [10/04/2021-21:34:40] [V] 
[TRT] Fastest Tactic: -7665878956594041863 Time: 0.013436 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (FusedConvActConvolution) [10/04/2021-21:34:40] [V] [TRT] Tactic: 524287 Time: 0.026376 [10/04/2021-21:34:40] [V] [TRT] Tactic: 720895 Time: 0.015496 [10/04/2021-21:34:40] [V] [TRT] Tactic: 983039 Time: 0.010456 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1048575 Time: 0.015424 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1703935 Time: 0.010736 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1769471 Time: 0.01022 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1966079 Time: 0.030504 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2031615 Time: 0.020172 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2228223 Time: 0.019268 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2621439 Time: 0.008396 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2752511 Time: 0.021324 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2818047 Time: 0.026728 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2883583 Time: 0.036324 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3014655 Time: 0.014372 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3145727 Time: 0.012688 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3473407 Time: 0.015744 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3604479 Time: 0.027924 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3735551 Time: 0.012444 [10/04/2021-21:34:40] [V] [TRT] Tactic: 4390911 Time: 0.031924 [10/04/2021-21:34:40] [V] [TRT] Tactic: 5046271 Time: 0.012484 [10/04/2021-21:34:40] [V] [TRT] Tactic: 5963775 Time: 0.02398 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6160383 Time: 0.019652 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6488063 Time: 0.022724 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6881279 Time: 0.01922 [10/04/2021-21:34:40] [V] [TRT] Tactic: 7274495 Time: 0.016172 [10/04/2021-21:34:40] [V] [TRT] Tactic: 7864319 Time: 0.010996 [10/04/2021-21:34:40] [V] [TRT] Tactic: 7995391 Time: 0.015016 [10/04/2021-21:34:40] [V] [TRT] Tactic: 8585215 Time: 0.02288 [10/04/2021-21:34:40] [V] [TRT] Tactic: 8847359 Time: 0.00926 [10/04/2021-21:34:40] [V] [TRT] Tactic: 8978431 Time: 0.020488 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9043967 Time: 0.013236 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9175039 Time: 0.011168 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9502719 Time: 0.029212 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9830399 Time: 0.012548 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10027007 Time: 0.0142 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10092543 Time: 0.027796 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10289151 Time: 0.02182 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10485759 Time: 0.012068 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10682367 Time: 0.021008 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10813439 Time: 0.011968 [10/04/2021-21:34:40] 
[V] [TRT] Fastest Tactic: 2621439 Time: 0.008396 [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.013824 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.017036 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.022236 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.014532 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.012504 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.017228 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -7665878956594041863 Time: 0.012504 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: FusedConvActConvolution Tactic: 2621439 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** 
Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1), Int8(2048,256:4,16,1) -> Int8(2048,256:4,16,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.013564 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.020232 [10/04/2021-21:34:40] [V] [TRT] 
StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.01638 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.013428 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.012896 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.014416 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -7665878956594041863 Time: 0.012896 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(2048,256:4,16,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (FusedConvActConvolution) [10/04/2021-21:34:40] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.009344 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: 
maxwell_int8x4_icudnn_int8x4_128x128_relu_interior_nn_v1 Tactic: 251000705357475173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 251000705357475173 Time: 0.015212 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.0098 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.017632 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.016528 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.00768 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_interior_nn_v1 Tactic: -7075774695632718014 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7075774695632718014 Time: 0.016724 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_interior_nn_v1 Tactic: -4176343923564075021 [10/04/2021-21:34:40] [V] [TRT] Tactic: -4176343923564075021 Time: 0.016504 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.009832 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -7665878956594041863 Time: 0.00768 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping 
[10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (FusedConvActConvolution) [10/04/2021-21:34:40] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.020352 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.025972 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.024764 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.021388 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.018432 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.023024 
[10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -7665878956594041863 Time: 0.018432 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (FusedConvActConvolution) [10/04/2021-21:34:40] [V] [TRT] Tactic: 524287 Time: 0.029352 [10/04/2021-21:34:40] [V] [TRT] Tactic: 720895 Time: 0.026124 [10/04/2021-21:34:40] [V] [TRT] Tactic: 983039 Time: 0.018672 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1048575 Time: 0.018428 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1703935 Time: 0.014424 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1769471 Time: 0.012724 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1966079 Time: 0.028664 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2031615 Time: 0.026452 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2228223 Time: 0.02524 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2424831 Time: 0.012212 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2621439 Time: 0.010992 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2752511 Time: 0.0217 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2818047 Time: 0.050608 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2883583 Time: 0.035908 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3014655 Time: 0.016508 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3145727 Time: 0.017356 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3473407 Time: 0.027676 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3604479 Time: 0.013172 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3735551 Time: 0.016932 [10/04/2021-21:34:40] [V] [TRT] Tactic: 4390911 Time: 0.035236 [10/04/2021-21:34:40] [V] [TRT] Tactic: 5046271 Time: 0.015992 [10/04/2021-21:34:40] [V] [TRT] Tactic: 5963775 Time: 0.030412 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6160383 Time: 0.024196 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6488063 Time: 0.023516 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6881279 Time: 0.023144 [10/04/2021-21:34:40] [V] [TRT] Tactic: 7274495 Time: 0.011192 [10/04/2021-21:34:40] [V] [TRT] Tactic: 7864319 Time: 0.012868 [10/04/2021-21:34:40] [V] [TRT] Tactic: 7995391 Time: 0.019456 [10/04/2021-21:34:40] [V] [TRT] Tactic: 8585215 Time: 0.033764 [10/04/2021-21:34:40] [V] [TRT] Tactic: 8847359 Time: 0.01178 [10/04/2021-21:34:40] [V] [TRT] Tactic: 8978431 Time: 0.025796 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9043967 Time: 0.013676 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9175039 Time: 0.013264 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9502719 Time: 0.044216 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9830399 Time: 0.015368 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9961471 Time: 0.01168 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10027007 Time: 0.024088 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10092543 Time: 0.034472 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10289151 Time: 0.034404 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10485759 Time: 
0.013724 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10682367 Time: 0.011388 [10/04/2021-21:34:40] [V] [TRT] Tactic: 10813439 Time: 0.012124 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 2621439 Time: 0.010992 [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 205464780613168514 [10/04/2021-21:34:40] [V] [TRT] Tactic: 205464780613168514 Time: 0.021108 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 3274338470516433987 [10/04/2021-21:34:40] [V] [TRT] Tactic: 3274338470516433987 Time: 0.025144 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: 6593074063275429680 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6593074063275429680 Time: 0.024704 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 9220760393820608173 [10/04/2021-21:34:40] [V] [TRT] Tactic: 9220760393820608173 Time: 0.02086 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:40] [V] [TRT] Tactic: -7665878956594041863 Time: 0.017536 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -2773311619323230421 [10/04/2021-21:34:40] [V] [TRT] Tactic: -2773311619323230421 Time: 0.021612 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -7665878956594041863 Time: 0.017536 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: FusedConvActConvolution Tactic: 2621439 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** 
Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(4096,64,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 1002 Time: 0.009088 [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.004892 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.004892 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Int8(4096,64,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 1002 Time: 0.008276 [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.00376 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.00376 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Int8(1024,64:4,8,1) -> Int8(4096,64,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 1002 Time: 0.007996 [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.004424 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.004424 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: 
Int8(4096,64,8,1) -> Float(4096,64,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: DequantLinearNode__1173_quantize_scale_node (Scale) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for DequantLinearNode__1173_quantize_scale_node [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.010528 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.010528 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Float(4096,64,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: DequantLinearNode__1173_quantize_scale_node (Scale) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for DequantLinearNode__1173_quantize_scale_node [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.008152 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.008152 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Int8(4096,64,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1) -> Int8(1024,64:4,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Int8(1024,64:4,8,1), Float(4096,64,8,1) -> Float(4096,64,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu (CaskConvolution) [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: maxwell_fp32_icudnn_int8x4_128x64_relu_medium_nn_v1 Tactic: 1696797196193926297 [10/04/2021-21:34:40] [V] [TRT] Tactic: 1696797196193926297 Time: 0.024456 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: maxwell_fp32_icudnn_int8x4_128x32_relu_large_nn_v1 Tactic: 2580405879259279307 [10/04/2021-21:34:40] [V] [TRT] Tactic: 2580405879259279307 Time: 0.032396 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu 
Set Tactic Name: maxwell_fp32_icudnn_int8x4_128x32_relu_medium_nn_v1 Tactic: 4974874275042868855 [10/04/2021-21:34:40] [V] [TRT] Tactic: 4974874275042868855 Time: 0.028544 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: maxwell_fp32_icudnn_int8x4_128x32_relu_small_nn_v1 Tactic: 6391716969403403498 [10/04/2021-21:34:40] [V] [TRT] Tactic: 6391716969403403498 Time: 0.023252 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: maxwell_fp32_icudnn_int8x4_128x128_relu_medium_nn_v1 Tactic: -6455039641494301630 [10/04/2021-21:34:40] [V] [TRT] Tactic: -6455039641494301630 Time: 0.027584 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: maxwell_fp32_icudnn_int8x4_128x128_relu_small_nn_v1 Tactic: -5622265363201188577 [10/04/2021-21:34:40] [V] [TRT] Tactic: -5622265363201188577 Time: 0.02546 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: maxwell_fp32_icudnn_int8x4_128x64_relu_large_nn_v1 Tactic: -5437014714313283152 [10/04/2021-21:34:40] [V] [TRT] Tactic: -5437014714313283152 Time: 0.0254 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: maxwell_fp32_icudnn_int8x4_128x128_relu_large_nn_v1 Tactic: -3853618168293654978 [10/04/2021-21:34:40] [V] [TRT] Tactic: -3853618168293654978 Time: 0.028672 [10/04/2021-21:34:40] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: maxwell_fp32_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -3461203036104521071 [10/04/2021-21:34:40] [V] [TRT] Tactic: -3461203036104521071 Time: 0.021604 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -3461203036104521071 Time: 0.021604 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -3461203036104521071 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Float(4096,64,8,1) -> Half(4096,64,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 1002 Time: 0.021648 [10/04/2021-21:34:40] [V] [TRT] Setting 
a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.01376 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.01376 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Float(4096,64,8,1) -> Half(4096,64,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 1002 Time: 0.010336 [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.005672 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.005672 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Half(4096,64,8,1) -> Float(4096,64,8,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 1002 Time: 0.009396 [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.003684 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.003684 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Float(4096,64,8,1) -> Float(64,1,1,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/global_average_pooling2d/Mean (TiledPooling) [10/04/2021-21:34:40] [V] [TRT] TiledPooling has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/global_average_pooling2d/Mean (CudnnPooling) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/global_average_pooling2d/Mean [10/04/2021-21:34:40] [V] [TRT] Tactic: -1 Time: 0.009024 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -1 Time: 0.009024 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CudnnPooling Tactic: -1 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Half(4096,64,8,1) -> Half(64,1,1,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/global_average_pooling2d/Mean (TiledPooling) [10/04/2021-21:34:40] [V] [TRT] TiledPooling has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/global_average_pooling2d/Mean (CudnnPooling) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/global_average_pooling2d/Mean [10/04/2021-21:34:40] [V] [TRT] Tactic: -1 Time: 0.008532 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: -1 Time: 0.008532 [10/04/2021-21:34:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CudnnPooling Tactic: -1 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Float(64,1,1,1) -> Float(64,1,64,64) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] 
[V] [TRT] Tactic: 1002 Time: 0.012844 [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.005644 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.005644 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Float(64,1,64,64) -> Float(64,1,1,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 1002 Time: 0.005508 [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.006184 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.005508 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Half(64,1,1,1) -> Float(64,1,1,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 1002 Time: 0.005856 [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.004912 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.004912 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning Reformat:Half(64,1,1,1) -> Float(64,1,64,64) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 1002 Time: 0.006524 [10/04/2021-21:34:40] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:40] [V] [TRT] Tactic: 0 Time: 0.00502 [10/04/2021-21:34:40] [V] [TRT] Fastest Tactic: 0 Time: 0.00502 [10/04/2021-21:34:40] [V] [TRT] *************** Autotuning format combination: Float(64,1,1,1) -> Float(10,1,1,1) *************** [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd (CudaDepthwiseConvolution) [10/04/2021-21:34:40] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd (FusedConvActConvolution) [10/04/2021-21:34:40] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + 
(Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd (CudnnConvolution) [10/04/2021-21:34:41] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 0 Time: 82.5147 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 1 Time: 0.0217 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 2 Time: 0.034212 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 4 Time: 0.211196 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 5 Time: 0.035692 [10/04/2021-21:34:42] [V] [TRT] Fastest Tactic: 1 Time: 0.0217 [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd (CublasConvolution) [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 0 Time: 0.306696 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for 
StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 1 Time: 0.00898 [10/04/2021-21:34:42] [V] [TRT] Fastest Tactic: 1 Time: 0.00898 [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd (CaskConvolution) [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x32_relu_medium_nn_v1 Tactic: 1062367460111450758 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 1062367460111450758 Time: 0.019312 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x32_relu_interior_nn_v0 Tactic: 1698681053543049347 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 1698681053543049347 Time: 0.013952 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x128_relu_medium_nn_v1 Tactic: 4501471010995462441 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] 
Tactic: 4501471010995462441 Time: 0.017172 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x64_relu_small_nn_v1 Tactic: 5137655947464784826 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 5137655947464784826 Time: 0.013952 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x128_relu_small_nn_v0 Tactic: 5288347012147084929 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 5288347012147084929 Time: 0.02162 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x128_relu_interior_nn_v1 Tactic: 5326823351883942011 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 5326823351883942011 Time: 0.018304 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x64_relu_interior_nn_v0 Tactic: 5500448035057547314 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + 
unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 5500448035057547314 Time: 0.015488 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x64_relu_medium_nn_v1 Tactic: 6645123197870846056 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 6645123197870846056 Time: 0.015488 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x32_relu_small_nn_v0 Tactic: 7144526460361122478 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 7144526460361122478 Time: 0.01378 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x128_relu_medium_nn_v0 Tactic: -8262349710178828730 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: -8262349710178828730 Time: 0.019768 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x32_relu_interior_nn_v1 Tactic: -6576203419454146580 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for 
StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: -6576203419454146580 Time: 0.018176 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x32_relu_medium_nn_v0 Tactic: -4787320710726427159 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: -4787320710726427159 Time: 0.014392 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x32_relu_small_nn_v1 Tactic: -3456450830548107839 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: -3456450830548107839 Time: 0.015096 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x64_relu_medium_nn_v0 Tactic: -1218658103698133241 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: -1218658103698133241 Time: 0.015844 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: 
maxwell_scudnn_128x64_relu_small_nn_v0 Tactic: -836875257600482091 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: -836875257600482091 Time: 0.014524 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x128_relu_small_nn_v1 Tactic: -410470605513481746 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: -410470605513481746 Time: 0.016128 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x128_relu_interior_nn_v0 Tactic: -377491875521947884 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: -377491875521947884 Time: 0.023408 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd Set Tactic Name: maxwell_scudnn_128x64_relu_interior_nn_v1 Tactic: -37215280111360163 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: -37215280111360163 Time: 0.021376 [10/04/2021-21:34:42] [V] [TRT] Fastest Tactic: 7144526460361122478 Time: 0.01378 [10/04/2021-21:34:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CublasConvolution Tactic: 1 [10/04/2021-21:34:42] [V] [TRT] 
*************** Autotuning format combination: Float(64,1,64,64) -> Float(10,1,10,10) *************** [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd (CudnnConvolution) [10/04/2021-21:34:42] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd (CublasConvolution) [10/04/2021-21:34:42] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd (CaskConvolution) [10/04/2021-21:34:42] [V] [TRT] CaskConvolution has no valid tactics for this config, skipping [10/04/2021-21:34:42] [V] [TRT] *************** Autotuning Reformat:Float(10,1,1,1) -> Half(10,1,1,1) *************** [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:42] [V] [TRT] Tactic: 1002 Time: 0.007532 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:42] [V] [TRT] Tactic: 0 Time: 0.00526 [10/04/2021-21:34:42] [V] [TRT] Fastest Tactic: 0 Time: 0.00526 [10/04/2021-21:34:42] [V] [TRT] *************** Autotuning Reformat:Float(10,1,10,10) -> Float(10,1,1,1) *************** [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:42] [V] [TRT] Tactic: 1002 Time: 0.005824 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:42] [V] [TRT] Tactic: 0 Time: 0.004332 [10/04/2021-21:34:42] [V] [TRT] Fastest Tactic: 0 Time: 0.004332 [10/04/2021-21:34:42] [V] [TRT] *************** Autotuning Reformat:Float(10,1,10,10) -> Half(10,1,1,1) *************** [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:42] [V] [TRT] Tactic: 1002 Time: 0.005964 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:42] [V] [TRT] Tactic: 0 Time: 0.003348 [10/04/2021-21:34:42] [V] [TRT] Fastest Tactic: 0 Time: 0.003348 [10/04/2021-21:34:42] [V] [TRT] *************** Autotuning Reformat:Half(10,1,1,1) -> Float(10,1,1,1) *************** [10/04/2021-21:34:42] 
[V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:42] [V] [TRT] Tactic: 1002 Time: 0.006064 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:42] [V] [TRT] Tactic: 0 Time: 0.0034 [10/04/2021-21:34:42] [V] [TRT] Fastest Tactic: 0 Time: 0.0034 [10/04/2021-21:34:42] [V] [TRT] *************** Autotuning format combination: Float(10,1,1,1) -> Float(10,1) *************** [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: copied_squeeze_after_StatefulPartitionedCall/model/dense/BiasAdd (Shuffle) [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for copied_squeeze_after_StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 0 Time: 0.004968 [10/04/2021-21:34:42] [V] [TRT] Fastest Tactic: 0 Time: 0.004968 [10/04/2021-21:34:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0 [10/04/2021-21:34:42] [V] [TRT] *************** Autotuning format combination: Half(10,1,1,1) -> Half(10,1) *************** [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: copied_squeeze_after_StatefulPartitionedCall/model/dense/BiasAdd (Shuffle) [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for copied_squeeze_after_StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:42] [V] [TRT] Tactic: 0 Time: 0.00332 [10/04/2021-21:34:42] [V] [TRT] Fastest Tactic: 0 Time: 0.00332 [10/04/2021-21:34:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0 [10/04/2021-21:34:42] [V] [TRT] *************** Autotuning Reformat:Half(10,1) -> Float(10,1) *************** [10/04/2021-21:34:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat (Reformat) [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:42] [V] [TRT] Tactic: 1002 Time: 0.005428 [10/04/2021-21:34:42] [V] [TRT] Setting a default quantization params because quantization data is missing for [10/04/2021-21:34:42] [V] [TRT] Tactic: 0 Time: 0.003992 [10/04/2021-21:34:42] [V] [TRT] Fastest Tactic: 0 Time: 0.003992 [10/04/2021-21:34:42] [V] [TRT] For layer StatefulPartitionedCall/model/global_average_pooling2d/Mean a non-conforming implementation was chosen than was requested i.e. requested layer computation precision and output precision types were ignored because it resulted in faster network performance. Enable strict mode to try force choose a conforming implementation. [10/04/2021-21:34:42] [V] [TRT] For layer copied_squeeze_after_StatefulPartitionedCall/model/dense/BiasAdd a non-conforming implementation was chosen than was requested i.e. requested layer computation precision and output precision types were ignored because it resulted in faster network performance. Enable strict mode to try force choose a conforming implementation. [10/04/2021-21:34:42] [V] [TRT] Formats and tactics selection completed in 2.88918 seconds. 
[10/04/2021-21:34:42] [V] [TRT] After reformat layers: 62 layers [10/04/2021-21:34:42] [V] [TRT] Block size 16777216 [10/04/2021-21:34:42] [V] [TRT] Block size 16384 [10/04/2021-21:34:42] [V] [TRT] Block size 16384 [10/04/2021-21:34:42] [V] [TRT] Block size 16384 [10/04/2021-21:34:42] [V] [TRT] Total Activation Memory: 16826368 [10/04/2021-21:34:42] [I] [TRT] Detected 1 inputs and 1 output network tensors. [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd + StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + QuantLinearNode__692_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd + StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd + StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd + StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + QuantLinearNode__752_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd + StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + QuantLinearNode__772_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd + StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd + StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd + StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd + StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:42] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd + StatefulPartitionedCall/model/add_11/add + StatefulPartitionedCall/model/activation_24/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd + StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd + StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd + StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: 
-7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd + StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd + StatefulPartitionedCall/model/add_16/add + StatefulPartitionedCall/model/activation_34/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd + StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd + StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + QuantLinearNode__1068_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd + StatefulPartitionedCall/model/add_20/add + StatefulPartitionedCall/model/activation_42/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd + StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd + 
StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd + StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd + StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + QuantLinearNode__1168_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd + StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu Set Tactic Name: maxwell_int8x4_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -7665878956594041863 [10/04/2021-21:34:43] [V] [TRT] StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu Set Tactic Name: maxwell_fp32_icudnn_int8x4_128x64_relu_small_nn_v1 Tactic: -3461203036104521071 [10/04/2021-21:34:43] [V] [TRT] Setting a default quantization params because quantization data is missing for QuantLinearNode__628_quantize_scale_node [10/04/2021-21:34:43] [V] [TRT] Setting a default quantization params because quantization data is missing for DequantLinearNode__1173_quantize_scale_node [10/04/2021-21:34:43] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/global_average_pooling2d/Mean [10/04/2021-21:34:43] [V] [TRT] Setting a default quantization params because quantization data is missing for StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd [10/04/2021-21:34:43] [V] [TRT] Layer: QuantLinearNode__628_quantize_scale_node HostPersistent: 0 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu HostPersistent: 1664 DevicePersistent: 8192 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + 
StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu HostPersistent: 1664 DevicePersistent: 10240 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 + QuantLinearNode__664_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd + StatefulPartitionedCall/model/activation_3/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd + StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu HostPersistent: 1664 DevicePersistent: 10240 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 + QuantLinearNode__684_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd + StatefulPartitionedCall/model/activation_5/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + QuantLinearNode__692_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd + StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu HostPersistent: 1664 DevicePersistent: 10240 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 + QuantLinearNode__704_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd + StatefulPartitionedCall/model/activation_7/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd + StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu HostPersistent: 1664 DevicePersistent: 10240 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 + QuantLinearNode__724_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd + StatefulPartitionedCall/model/activation_9/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd + StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu HostPersistent: 1664 DevicePersistent: 10240 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 + QuantLinearNode__744_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd + StatefulPartitionedCall/model/activation_11/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + QuantLinearNode__752_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd + StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu HostPersistent: 1664 DevicePersistent: 10240 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 + QuantLinearNode__764_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd + StatefulPartitionedCall/model/activation_13/Relu HostPersistent: 2192 DevicePersistent: 0 
[10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + QuantLinearNode__772_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd + StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu HostPersistent: 1664 DevicePersistent: 10240 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 + QuantLinearNode__784_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd + StatefulPartitionedCall/model/activation_15/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd + StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu HostPersistent: 1664 DevicePersistent: 10240 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 + QuantLinearNode__804_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd + StatefulPartitionedCall/model/activation_17/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd + StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu HostPersistent: 1664 DevicePersistent: 10240 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu HostPersistent: 1664 DevicePersistent: 7680 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd HostPersistent: 1664 DevicePersistent: 3584 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu HostPersistent: 1664 DevicePersistent: 12288 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd + StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu HostPersistent: 1664 DevicePersistent: 12288 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 + QuantLinearNode__872_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd + StatefulPartitionedCall/model/activation_23/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd + StatefulPartitionedCall/model/add_11/add + StatefulPartitionedCall/model/activation_24/Relu HostPersistent: 1664 DevicePersistent: 12288 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 + QuantLinearNode__892_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd + StatefulPartitionedCall/model/activation_25/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd + StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu HostPersistent: 1664 DevicePersistent: 12288 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 + QuantLinearNode__912_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd + StatefulPartitionedCall/model/activation_27/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd + StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu HostPersistent: 1664 DevicePersistent: 12288 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 + QuantLinearNode__932_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd + StatefulPartitionedCall/model/activation_29/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd + StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu HostPersistent: 1664 DevicePersistent: 12288 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 + QuantLinearNode__952_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd + StatefulPartitionedCall/model/activation_31/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd + StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu HostPersistent: 1664 DevicePersistent: 12288 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 + QuantLinearNode__972_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd + StatefulPartitionedCall/model/activation_33/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd + StatefulPartitionedCall/model/add_16/add + StatefulPartitionedCall/model/activation_34/Relu HostPersistent: 1664 DevicePersistent: 12288 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 + QuantLinearNode__992_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd + 
StatefulPartitionedCall/model/activation_35/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd + StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu HostPersistent: 1664 DevicePersistent: 12288 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu HostPersistent: 1664 DevicePersistent: 20480 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd HostPersistent: 1664 DevicePersistent: 4096 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu HostPersistent: 1664 DevicePersistent: 38912 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd + StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu HostPersistent: 1664 DevicePersistent: 38912 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 + QuantLinearNode__1060_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd + StatefulPartitionedCall/model/activation_41/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + QuantLinearNode__1068_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd + StatefulPartitionedCall/model/add_20/add + StatefulPartitionedCall/model/activation_42/Relu HostPersistent: 1664 DevicePersistent: 38912 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 + QuantLinearNode__1080_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd + StatefulPartitionedCall/model/activation_43/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd + StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu HostPersistent: 1664 DevicePersistent: 38912 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 + QuantLinearNode__1100_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd + StatefulPartitionedCall/model/activation_45/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: 
StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd + StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu HostPersistent: 1664 DevicePersistent: 38912 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 + QuantLinearNode__1120_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd + StatefulPartitionedCall/model/activation_47/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd + StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu HostPersistent: 1664 DevicePersistent: 38912 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 + QuantLinearNode__1140_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd + StatefulPartitionedCall/model/activation_49/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd + StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu HostPersistent: 1664 DevicePersistent: 38912 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 + QuantLinearNode__1160_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd + StatefulPartitionedCall/model/activation_51/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + QuantLinearNode__1168_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd + StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu HostPersistent: 1664 DevicePersistent: 38912 [10/04/2021-21:34:43] [V] [TRT] Layer: DequantLinearNode__1173_quantize_scale_node HostPersistent: 0 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 + QuantLinearNode__1180_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd + StatefulPartitionedCall/model/activation_53/Relu HostPersistent: 2192 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu HostPersistent: 1664 DevicePersistent: 38912 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/global_average_pooling2d/Mean HostPersistent: 48 DevicePersistent: 0 [10/04/2021-21:34:43] [V] [TRT] Layer: StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd HostPersistent: 340 DevicePersistent: 0 [10/04/2021-21:34:43] [I] [TRT] Total Host Persistent Memory: 108448 
[10/04/2021-21:34:43] [I] [TRT] Total Device Persistent Memory: 596992 [10/04/2021-21:34:43] [I] [TRT] Total Scratch Memory: 0 [10/04/2021-21:34:43] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 4 MiB, GPU 4 MiB [10/04/2021-21:34:43] [V] [TRT] Using cublas a tactic source [10/04/2021-21:34:43] [10/04/2021-21:34:43] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 9822, GPU 1199 (MiB) [10/04/2021-21:34:43] [V] [TRT] Using cuDNN as a tactic source [10/04/2021-21:34:43] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 9822, GPU 1207 (MiB) [10/04/2021-21:34:43] [10/04/2021-21:34:43] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 9822, GPU 1191 (MiB) [10/04/2021-21:34:43] [V] [TRT] Engine generation completed in 5.26014 seconds. [10/04/2021-21:34:43] [V] [TRT] Deleting timing cache: 31 entries, 46 hits [10/04/2021-21:34:43] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 9820, GPU 1175 (MiB) [10/04/2021-21:34:43] [V] [TRT] Engine Layer Information: Layer(Scale): QuantLinearNode__628_quantize_scale_node, Tactic: 0, input_1[Float(1,3,32,32)] -> QuantLinearNode__628:0[Int8(1,3,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d/transpose__8 + QuantLinearNode__632_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d/BiasAdd + StatefulPartitionedCall/model/activation/Relu, Tactic: -7665878956594041863, QuantLinearNode__628:0[Int8(1,3,32,32)] -> QuantLinearNode__640:0[Int8(1,16,32,32)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_1/transpose__20 + QuantLinearNode__644_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_1/BiasAdd + StatefulPartitionedCall/model/activation_1/Relu, Tactic: 10485759, QuantLinearNode__640:0[Int8(1,16,32,32)] -> QuantLinearNode__648:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_2/transpose__30 + QuantLinearNode__652_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_2/BiasAdd + StatefulPartitionedCall/model/add/add + StatefulPartitionedCall/model/activation_2/Relu, Tactic: -7665878956594041863, QuantLinearNode__648:0[Int8(1,16,32,32)], QuantLinearNode__640:0[Int8(1,16,32,32)] -> QuantLinearNode__660:0[Int8(1,16,32,32)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_3/transpose__42 + QuantLinearNode__664_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_3/BiasAdd + StatefulPartitionedCall/model/activation_3/Relu, Tactic: 10485759, QuantLinearNode__660:0[Int8(1,16,32,32)] -> QuantLinearNode__668:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_4/transpose__52 + QuantLinearNode__672_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_4/BiasAdd + StatefulPartitionedCall/model/add_1/add + StatefulPartitionedCall/model/activation_4/Relu, Tactic: -7665878956594041863, QuantLinearNode__668:0[Int8(1,16,32,32)], QuantLinearNode__660:0[Int8(1,16,32,32)] -> QuantLinearNode__680:0[Int8(1,16,32,32)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_5/transpose__64 + QuantLinearNode__684_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_5/BiasAdd + StatefulPartitionedCall/model/activation_5/Relu, Tactic: 10485759, QuantLinearNode__680:0[Int8(1,16,32,32)] -> QuantLinearNode__688:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_6/transpose__74 + 
QuantLinearNode__692_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_6/BiasAdd + StatefulPartitionedCall/model/add_2/add + StatefulPartitionedCall/model/activation_6/Relu, Tactic: -7665878956594041863, QuantLinearNode__688:0[Int8(1,16,32,32)], QuantLinearNode__680:0[Int8(1,16,32,32)] -> QuantLinearNode__700:0[Int8(1,16,32,32)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_7/transpose__86 + QuantLinearNode__704_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_7/BiasAdd + StatefulPartitionedCall/model/activation_7/Relu, Tactic: 10485759, QuantLinearNode__700:0[Int8(1,16,32,32)] -> QuantLinearNode__708:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_8/transpose__96 + QuantLinearNode__712_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_8/BiasAdd + StatefulPartitionedCall/model/add_3/add + StatefulPartitionedCall/model/activation_8/Relu, Tactic: -7665878956594041863, QuantLinearNode__708:0[Int8(1,16,32,32)], QuantLinearNode__700:0[Int8(1,16,32,32)] -> QuantLinearNode__720:0[Int8(1,16,32,32)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_9/transpose__108 + QuantLinearNode__724_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_9/BiasAdd + StatefulPartitionedCall/model/activation_9/Relu, Tactic: 10485759, QuantLinearNode__720:0[Int8(1,16,32,32)] -> QuantLinearNode__728:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_10/transpose__118 + QuantLinearNode__732_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_10/BiasAdd + StatefulPartitionedCall/model/add_4/add + StatefulPartitionedCall/model/activation_10/Relu, Tactic: -7665878956594041863, QuantLinearNode__728:0[Int8(1,16,32,32)], QuantLinearNode__720:0[Int8(1,16,32,32)] -> QuantLinearNode__740:0[Int8(1,16,32,32)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_11/transpose__130 + QuantLinearNode__744_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_11/BiasAdd + StatefulPartitionedCall/model/activation_11/Relu, Tactic: 10485759, QuantLinearNode__740:0[Int8(1,16,32,32)] -> QuantLinearNode__748:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_12/transpose__140 + QuantLinearNode__752_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_12/BiasAdd + StatefulPartitionedCall/model/add_5/add + StatefulPartitionedCall/model/activation_12/Relu, Tactic: -7665878956594041863, QuantLinearNode__748:0[Int8(1,16,32,32)], QuantLinearNode__740:0[Int8(1,16,32,32)] -> QuantLinearNode__760:0[Int8(1,16,32,32)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_13/transpose__152 + QuantLinearNode__764_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_13/BiasAdd + StatefulPartitionedCall/model/activation_13/Relu, Tactic: 10485759, QuantLinearNode__760:0[Int8(1,16,32,32)] -> QuantLinearNode__768:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_14/transpose__162 + QuantLinearNode__772_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_14/BiasAdd + StatefulPartitionedCall/model/add_6/add + StatefulPartitionedCall/model/activation_14/Relu, Tactic: -7665878956594041863, QuantLinearNode__768:0[Int8(1,16,32,32)], QuantLinearNode__760:0[Int8(1,16,32,32)] -> QuantLinearNode__780:0[Int8(1,16,32,32)] Layer(FusedConvActConvolution): 
StatefulPartitionedCall/model/quant_conv2d_15/transpose__174 + QuantLinearNode__784_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_15/BiasAdd + StatefulPartitionedCall/model/activation_15/Relu, Tactic: 10485759, QuantLinearNode__780:0[Int8(1,16,32,32)] -> QuantLinearNode__788:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_16/transpose__184 + QuantLinearNode__792_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_16/BiasAdd + StatefulPartitionedCall/model/add_7/add + StatefulPartitionedCall/model/activation_16/Relu, Tactic: -7665878956594041863, QuantLinearNode__788:0[Int8(1,16,32,32)], QuantLinearNode__780:0[Int8(1,16,32,32)] -> QuantLinearNode__800:0[Int8(1,16,32,32)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_17/transpose__196 + QuantLinearNode__804_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_17/BiasAdd + StatefulPartitionedCall/model/activation_17/Relu, Tactic: 10485759, QuantLinearNode__800:0[Int8(1,16,32,32)] -> QuantLinearNode__808:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_18/transpose__206 + QuantLinearNode__812_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_18/BiasAdd + StatefulPartitionedCall/model/add_8/add + StatefulPartitionedCall/model/activation_18/Relu, Tactic: -7665878956594041863, QuantLinearNode__808:0[Int8(1,16,32,32)], QuantLinearNode__800:0[Int8(1,16,32,32)] -> QuantLinearNode__828:0[Int8(1,16,32,32)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_19/transpose__224 + QuantLinearNode__832_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_19/BiasAdd + StatefulPartitionedCall/model/activation_19/Relu, Tactic: -7665878956594041863, QuantLinearNode__828:0[Int8(1,16,32,32)] -> QuantLinearNode__836:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_21/transpose__216 + QuantLinearNode__820_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_21/BiasAdd, Tactic: -7665878956594041863, QuantLinearNode__828:0[Int8(1,16,32,32)] -> QuantLinearNode__824:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_20/transpose__234 + QuantLinearNode__840_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_20/BiasAdd + StatefulPartitionedCall/model/add_9/add + StatefulPartitionedCall/model/activation_20/Relu, Tactic: -7665878956594041863, QuantLinearNode__836:0[Int8(1,32,16,16)], QuantLinearNode__824:0[Int8(1,32,16,16)] -> QuantLinearNode__848:0[Int8(1,32,16,16)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_22/transpose__246 + QuantLinearNode__852_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_22/BiasAdd + StatefulPartitionedCall/model/activation_21/Relu, Tactic: 2621439, QuantLinearNode__848:0[Int8(1,32,16,16)] -> QuantLinearNode__856:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_23/transpose__256 + QuantLinearNode__860_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_23/BiasAdd + StatefulPartitionedCall/model/add_10/add + StatefulPartitionedCall/model/activation_22/Relu, Tactic: -7665878956594041863, QuantLinearNode__856:0[Int8(1,32,16,16)], QuantLinearNode__848:0[Int8(1,32,16,16)] -> QuantLinearNode__868:0[Int8(1,32,16,16)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_24/transpose__268 + QuantLinearNode__872_quantize_scale_node + 
StatefulPartitionedCall/model/quant_conv2d_24/BiasAdd + StatefulPartitionedCall/model/activation_23/Relu, Tactic: 2621439, QuantLinearNode__868:0[Int8(1,32,16,16)] -> QuantLinearNode__876:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_25/transpose__278 + QuantLinearNode__880_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_25/BiasAdd + StatefulPartitionedCall/model/add_11/add + StatefulPartitionedCall/model/activation_24/Relu, Tactic: -7665878956594041863, QuantLinearNode__876:0[Int8(1,32,16,16)], QuantLinearNode__868:0[Int8(1,32,16,16)] -> QuantLinearNode__888:0[Int8(1,32,16,16)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_26/transpose__290 + QuantLinearNode__892_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_26/BiasAdd + StatefulPartitionedCall/model/activation_25/Relu, Tactic: 2621439, QuantLinearNode__888:0[Int8(1,32,16,16)] -> QuantLinearNode__896:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_27/transpose__300 + QuantLinearNode__900_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_27/BiasAdd + StatefulPartitionedCall/model/add_12/add + StatefulPartitionedCall/model/activation_26/Relu, Tactic: -7665878956594041863, QuantLinearNode__896:0[Int8(1,32,16,16)], QuantLinearNode__888:0[Int8(1,32,16,16)] -> QuantLinearNode__908:0[Int8(1,32,16,16)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_28/transpose__312 + QuantLinearNode__912_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_28/BiasAdd + StatefulPartitionedCall/model/activation_27/Relu, Tactic: 2621439, QuantLinearNode__908:0[Int8(1,32,16,16)] -> QuantLinearNode__916:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_29/transpose__322 + QuantLinearNode__920_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_29/BiasAdd + StatefulPartitionedCall/model/add_13/add + StatefulPartitionedCall/model/activation_28/Relu, Tactic: -7665878956594041863, QuantLinearNode__916:0[Int8(1,32,16,16)], QuantLinearNode__908:0[Int8(1,32,16,16)] -> QuantLinearNode__928:0[Int8(1,32,16,16)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_30/transpose__334 + QuantLinearNode__932_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_30/BiasAdd + StatefulPartitionedCall/model/activation_29/Relu, Tactic: 2621439, QuantLinearNode__928:0[Int8(1,32,16,16)] -> QuantLinearNode__936:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_31/transpose__344 + QuantLinearNode__940_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_31/BiasAdd + StatefulPartitionedCall/model/add_14/add + StatefulPartitionedCall/model/activation_30/Relu, Tactic: -7665878956594041863, QuantLinearNode__936:0[Int8(1,32,16,16)], QuantLinearNode__928:0[Int8(1,32,16,16)] -> QuantLinearNode__948:0[Int8(1,32,16,16)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_32/transpose__356 + QuantLinearNode__952_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_32/BiasAdd + StatefulPartitionedCall/model/activation_31/Relu, Tactic: 2621439, QuantLinearNode__948:0[Int8(1,32,16,16)] -> QuantLinearNode__956:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_33/transpose__366 + QuantLinearNode__960_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_33/BiasAdd + 
StatefulPartitionedCall/model/add_15/add + StatefulPartitionedCall/model/activation_32/Relu, Tactic: -7665878956594041863, QuantLinearNode__956:0[Int8(1,32,16,16)], QuantLinearNode__948:0[Int8(1,32,16,16)] -> QuantLinearNode__968:0[Int8(1,32,16,16)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_34/transpose__378 + QuantLinearNode__972_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_34/BiasAdd + StatefulPartitionedCall/model/activation_33/Relu, Tactic: 2621439, QuantLinearNode__968:0[Int8(1,32,16,16)] -> QuantLinearNode__976:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_35/transpose__388 + QuantLinearNode__980_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_35/BiasAdd + StatefulPartitionedCall/model/add_16/add + StatefulPartitionedCall/model/activation_34/Relu, Tactic: -7665878956594041863, QuantLinearNode__976:0[Int8(1,32,16,16)], QuantLinearNode__968:0[Int8(1,32,16,16)] -> QuantLinearNode__988:0[Int8(1,32,16,16)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_36/transpose__400 + QuantLinearNode__992_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_36/BiasAdd + StatefulPartitionedCall/model/activation_35/Relu, Tactic: 2621439, QuantLinearNode__988:0[Int8(1,32,16,16)] -> QuantLinearNode__996:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_37/transpose__410 + QuantLinearNode__1000_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_37/BiasAdd + StatefulPartitionedCall/model/add_17/add + StatefulPartitionedCall/model/activation_36/Relu, Tactic: -7665878956594041863, QuantLinearNode__996:0[Int8(1,32,16,16)], QuantLinearNode__988:0[Int8(1,32,16,16)] -> QuantLinearNode__1016:0[Int8(1,32,16,16)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_38/transpose__428 + QuantLinearNode__1020_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_38/BiasAdd + StatefulPartitionedCall/model/activation_37/Relu, Tactic: -7665878956594041863, QuantLinearNode__1016:0[Int8(1,32,16,16)] -> QuantLinearNode__1024:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_40/transpose__420 + QuantLinearNode__1008_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_40/BiasAdd, Tactic: -7665878956594041863, QuantLinearNode__1016:0[Int8(1,32,16,16)] -> QuantLinearNode__1012:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_39/transpose__438 + QuantLinearNode__1028_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_39/BiasAdd + StatefulPartitionedCall/model/add_18/add + StatefulPartitionedCall/model/activation_38/Relu, Tactic: -7665878956594041863, QuantLinearNode__1024:0[Int8(1,64,8,8)], QuantLinearNode__1012:0[Int8(1,64,8,8)] -> QuantLinearNode__1036:0[Int8(1,64,8,8)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_41/transpose__450 + QuantLinearNode__1040_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_41/BiasAdd + StatefulPartitionedCall/model/activation_39/Relu, Tactic: 2621439, QuantLinearNode__1036:0[Int8(1,64,8,8)] -> QuantLinearNode__1044:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_42/transpose__460 + QuantLinearNode__1048_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_42/BiasAdd + StatefulPartitionedCall/model/add_19/add + StatefulPartitionedCall/model/activation_40/Relu, Tactic: 
-7665878956594041863, QuantLinearNode__1044:0[Int8(1,64,8,8)], QuantLinearNode__1036:0[Int8(1,64,8,8)] -> QuantLinearNode__1056:0[Int8(1,64,8,8)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_43/transpose__472 + QuantLinearNode__1060_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_43/BiasAdd + StatefulPartitionedCall/model/activation_41/Relu, Tactic: 2621439, QuantLinearNode__1056:0[Int8(1,64,8,8)] -> QuantLinearNode__1064:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_44/transpose__482 + QuantLinearNode__1068_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_44/BiasAdd + StatefulPartitionedCall/model/add_20/add + StatefulPartitionedCall/model/activation_42/Relu, Tactic: -7665878956594041863, QuantLinearNode__1064:0[Int8(1,64,8,8)], QuantLinearNode__1056:0[Int8(1,64,8,8)] -> QuantLinearNode__1076:0[Int8(1,64,8,8)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_45/transpose__494 + QuantLinearNode__1080_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_45/BiasAdd + StatefulPartitionedCall/model/activation_43/Relu, Tactic: 2621439, QuantLinearNode__1076:0[Int8(1,64,8,8)] -> QuantLinearNode__1084:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_46/transpose__504 + QuantLinearNode__1088_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_46/BiasAdd + StatefulPartitionedCall/model/add_21/add + StatefulPartitionedCall/model/activation_44/Relu, Tactic: -7665878956594041863, QuantLinearNode__1084:0[Int8(1,64,8,8)], QuantLinearNode__1076:0[Int8(1,64,8,8)] -> QuantLinearNode__1096:0[Int8(1,64,8,8)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_47/transpose__516 + QuantLinearNode__1100_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_47/BiasAdd + StatefulPartitionedCall/model/activation_45/Relu, Tactic: 2621439, QuantLinearNode__1096:0[Int8(1,64,8,8)] -> QuantLinearNode__1104:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_48/transpose__526 + QuantLinearNode__1108_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_48/BiasAdd + StatefulPartitionedCall/model/add_22/add + StatefulPartitionedCall/model/activation_46/Relu, Tactic: -7665878956594041863, QuantLinearNode__1104:0[Int8(1,64,8,8)], QuantLinearNode__1096:0[Int8(1,64,8,8)] -> QuantLinearNode__1116:0[Int8(1,64,8,8)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_49/transpose__538 + QuantLinearNode__1120_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_49/BiasAdd + StatefulPartitionedCall/model/activation_47/Relu, Tactic: 2621439, QuantLinearNode__1116:0[Int8(1,64,8,8)] -> QuantLinearNode__1124:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_50/transpose__548 + QuantLinearNode__1128_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_50/BiasAdd + StatefulPartitionedCall/model/add_23/add + StatefulPartitionedCall/model/activation_48/Relu, Tactic: -7665878956594041863, QuantLinearNode__1124:0[Int8(1,64,8,8)], QuantLinearNode__1116:0[Int8(1,64,8,8)] -> QuantLinearNode__1136:0[Int8(1,64,8,8)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_51/transpose__560 + QuantLinearNode__1140_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_51/BiasAdd + StatefulPartitionedCall/model/activation_49/Relu, Tactic: 2621439, 
QuantLinearNode__1136:0[Int8(1,64,8,8)] -> QuantLinearNode__1144:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_52/transpose__570 + QuantLinearNode__1148_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_52/BiasAdd + StatefulPartitionedCall/model/add_24/add + StatefulPartitionedCall/model/activation_50/Relu, Tactic: -7665878956594041863, QuantLinearNode__1144:0[Int8(1,64,8,8)], QuantLinearNode__1136:0[Int8(1,64,8,8)] -> QuantLinearNode__1156:0[Int8(1,64,8,8)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_53/transpose__582 + QuantLinearNode__1160_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_53/BiasAdd + StatefulPartitionedCall/model/activation_51/Relu, Tactic: 2621439, QuantLinearNode__1156:0[Int8(1,64,8,8)] -> QuantLinearNode__1164:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_54/transpose__592 + QuantLinearNode__1168_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_54/BiasAdd + StatefulPartitionedCall/model/add_25/add + StatefulPartitionedCall/model/activation_52/Relu, Tactic: -7665878956594041863, QuantLinearNode__1164:0[Int8(1,64,8,8)], QuantLinearNode__1156:0[Int8(1,64,8,8)] -> QuantLinearNode__1176:0[Int8(1,64,8,8)] Layer(Scale): DequantLinearNode__1173_quantize_scale_node, Tactic: 0, QuantLinearNode__1176:0[Int8(1,64,8,8)] -> StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0[Float(1,64,8,8)] Layer(FusedConvActConvolution): StatefulPartitionedCall/model/quant_conv2d_55/transpose__604 + QuantLinearNode__1180_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_55/BiasAdd + StatefulPartitionedCall/model/activation_53/Relu, Tactic: 2621439, QuantLinearNode__1176:0[Int8(1,64,8,8)] -> QuantLinearNode__1184:0[Int8(1,64,8,8)] Layer(CaskConvolution): StatefulPartitionedCall/model/quant_conv2d_56/transpose__614 + QuantLinearNode__1188_quantize_scale_node + StatefulPartitionedCall/model/quant_conv2d_56/BiasAdd + StatefulPartitionedCall/model/add_26/add + StatefulPartitionedCall/model/activation_54/Relu, Tactic: -3461203036104521071, QuantLinearNode__1184:0[Int8(1,64,8,8)], StatefulPartitionedCall/model/quant_identity_26/quantize_and_dequantize:0[Float(1,64,8,8)] -> StatefulPartitionedCall/model/activation_54/Relu:0[Float(1,64,8,8)] Layer(CudnnPooling): StatefulPartitionedCall/model/global_average_pooling2d/Mean, Tactic: -1, StatefulPartitionedCall/model/activation_54/Relu:0[Float(1,64,8,8)] -> StatefulPartitionedCall/model/global_average_pooling2d/Mean:0[Float(1,64,1,1)] Layer(CublasConvolution): StatefulPartitionedCall/model/dense/MatMul + StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + unsqueeze_node_after_StatefulPartitionedCall/model/dense/BiasAdd/ReadVariableOp__623 + (Unnamed Layer* 1115) [Shuffle] + StatefulPartitionedCall/model/dense/BiasAdd, Tactic: 1, StatefulPartitionedCall/model/global_average_pooling2d/Mean:0[Float(1,64,1,1)] -> StatefulPartitionedCall/model/dense/BiasAdd_out_tensor[Float(1,10,1,1)] [10/04/2021-21:34:43] [I] [TRT] [MemUsageSnapshot] Builder end: CPU 9820 MiB, GPU 1175 MiB [10/04/2021-21:34:43] [I] [TRT] Loaded engine size: 1 MB [10/04/2021-21:34:43] [I] [TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 9819 MiB, GPU 1171 MiB [10/04/2021-21:34:43] [V] [TRT] Using cublas a tactic source [10/04/2021-21:34:43] [10/04/2021-21:34:43] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 9819, GPU 1179 
(MiB) [10/04/2021-21:34:43] [V] [TRT] Using cuDNN as a tactic source [10/04/2021-21:34:43] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 9819, GPU 1187 (MiB) [10/04/2021-21:34:43] [10/04/2021-21:34:43] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 9817, GPU 1171 (MiB) [10/04/2021-21:34:43] [V] [TRT] Deserialization required 17054 microseconds. [10/04/2021-21:34:43] [I] [TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 9817 MiB, GPU 1171 MiB [10/04/2021-21:34:43] [I] Engine built in 7.6514 sec. [10/04/2021-21:34:43] [I] [TRT] [MemUsageSnapshot] ExecutionContext creation begin: CPU 9816 MiB, GPU 1171 MiB [10/04/2021-21:34:43] [V] [TRT] Using cublas a tactic source [10/04/2021-21:34:43] [10/04/2021-21:34:43] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 9816, GPU 1179 (MiB) [10/04/2021-21:34:43] [V] [TRT] Using cuDNN as a tactic source [10/04/2021-21:34:43] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 9816, GPU 1187 (MiB) [10/04/2021-21:34:43] [10/04/2021-21:34:43] [V] [TRT] Total per-runner device memory is 596992 [10/04/2021-21:34:43] [V] [TRT] Total per-runner host memory is 108448 [10/04/2021-21:34:43] [V] [TRT] Allocated activation device memory of size 49152 [10/04/2021-21:34:43] [I] [TRT] [MemUsageSnapshot] ExecutionContext creation end: CPU 9816 MiB, GPU 1187 MiB [10/04/2021-21:34:43] [I] Created input binding for input_1 with dimensions 1x3x32x32 [10/04/2021-21:34:43] [I] Created output binding for dense with dimensions 1x10 [10/04/2021-21:34:43] [I] Starting inference [10/04/2021-21:34:46] [I] Warmup completed 285 queries over 200 ms [10/04/2021-21:34:46] [I] Timing trace has 5160 queries over 3.00066 s [10/04/2021-21:34:46] [I] [10/04/2021-21:34:46] [I] === Trace details === [10/04/2021-21:34:46] [I] Trace averages of 10 runs: [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.596777 ms - Host latency: 0.596777 ms (end to end 0.596777 ms, enqueue 0.024411 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.59716 ms - Host latency: 0.59716 ms (end to end 0.59716 ms, enqueue 0.0224808 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.612576 ms - Host latency: 0.612576 ms (end to end 0.612576 ms, enqueue 0.0289291 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.547342 ms - Host latency: 0.547342 ms (end to end 0.547342 ms, enqueue 0.0256821 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544937 ms - Host latency: 0.544937 ms (end to end 0.544937 ms, enqueue 0.0241104 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.595007 ms - Host latency: 0.595007 ms (end to end 0.595007 ms, enqueue 0.0274612 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544794 ms - Host latency: 0.544794 ms (end to end 0.544794 ms, enqueue 0.0272629 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.546664 ms - Host latency: 0.546664 ms (end to end 0.546664 ms, enqueue 0.0301392 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.555252 ms - Host latency: 0.555252 ms (end to end 0.555252 ms, enqueue 0.0230347 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.57103 ms - Host latency: 0.57103 ms (end to end 0.57103 ms, enqueue 0.0259796 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544254 ms - Host latency: 0.544254 ms (end to end 0.544254 ms, enqueue 0.0300293 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544202 ms - Host 
latency: 0.544202 ms (end to end 0.544202 ms, enqueue 0.0238251 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.545139 ms - Host latency: 0.545139 ms (end to end 0.545139 ms, enqueue 0.023114 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.54393 ms - Host latency: 0.54393 ms (end to end 0.54393 ms, enqueue 0.0255981 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.543951 ms - Host latency: 0.543951 ms (end to end 0.543951 ms, enqueue 0.0226563 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.543765 ms - Host latency: 0.543765 ms (end to end 0.543765 ms, enqueue 0.0254547 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544284 ms - Host latency: 0.544284 ms (end to end 0.544284 ms, enqueue 0.0223022 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544937 ms - Host latency: 0.544937 ms (end to end 0.544937 ms, enqueue 0.0228363 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.545953 ms - Host latency: 0.545953 ms (end to end 0.545953 ms, enqueue 0.0252747 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544379 ms - Host latency: 0.544379 ms (end to end 0.544379 ms, enqueue 0.0222809 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544925 ms - Host latency: 0.544925 ms (end to end 0.544925 ms, enqueue 0.0291199 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544583 ms - Host latency: 0.544583 ms (end to end 0.544583 ms, enqueue 0.0226288 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.54386 ms - Host latency: 0.54386 ms (end to end 0.54386 ms, enqueue 0.0224304 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544406 ms - Host latency: 0.544406 ms (end to end 0.544406 ms, enqueue 0.0223602 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.54429 ms - Host latency: 0.54429 ms (end to end 0.54429 ms, enqueue 0.0239716 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.544119 ms - Host latency: 0.544119 ms (end to end 0.544119 ms, enqueue 0.0221985 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.546115 ms - Host latency: 0.546115 ms (end to end 0.546115 ms, enqueue 0.0269623 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.54407 ms - Host latency: 0.54407 ms (end to end 0.54407 ms, enqueue 0.0264496 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.547278 ms - Host latency: 0.547278 ms (end to end 0.547278 ms, enqueue 0.0238007 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.537412 ms - Host latency: 0.537412 ms (end to end 0.537412 ms, enqueue 0.0329407 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.511285 ms - Host latency: 0.511285 ms (end to end 0.511285 ms, enqueue 0.0224457 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498898 ms - Host latency: 0.498898 ms (end to end 0.498898 ms, enqueue 0.0224426 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498801 ms - Host latency: 0.498801 ms (end to end 0.498801 ms, enqueue 0.0226929 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.51044 ms - Host latency: 0.51044 ms (end to end 0.51044 ms, enqueue 0.0230194 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.4991 ms - Host latency: 0.4991 ms (end to end 0.4991 ms, enqueue 0.0230957 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.497989 ms - Host latency: 0.497989 ms (end to end 0.497989 ms, 
enqueue 0.0221649 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.497781 ms - Host latency: 0.497781 ms (end to end 0.497781 ms, enqueue 0.0243958 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498138 ms - Host latency: 0.498138 ms (end to end 0.498138 ms, enqueue 0.0273132 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.504343 ms - Host latency: 0.504343 ms (end to end 0.504343 ms, enqueue 0.0227966 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.501535 ms - Host latency: 0.501535 ms (end to end 0.501535 ms, enqueue 0.0253082 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.499323 ms - Host latency: 0.499323 ms (end to end 0.499323 ms, enqueue 0.0226715 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498657 ms - Host latency: 0.498657 ms (end to end 0.498657 ms, enqueue 0.0333527 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498712 ms - Host latency: 0.498712 ms (end to end 0.498712 ms, enqueue 0.0226318 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498184 ms - Host latency: 0.498184 ms (end to end 0.498184 ms, enqueue 0.0326813 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498944 ms - Host latency: 0.498944 ms (end to end 0.498944 ms, enqueue 0.0305084 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.497842 ms - Host latency: 0.497842 ms (end to end 0.497842 ms, enqueue 0.022641 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496521 ms - Host latency: 0.496521 ms (end to end 0.496521 ms, enqueue 0.0237396 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496799 ms - Host latency: 0.496799 ms (end to end 0.496799 ms, enqueue 0.0225494 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496835 ms - Host latency: 0.496835 ms (end to end 0.496835 ms, enqueue 0.0224792 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49729 ms - Host latency: 0.49729 ms (end to end 0.49729 ms, enqueue 0.0229095 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498358 ms - Host latency: 0.498358 ms (end to end 0.498358 ms, enqueue 0.0220947 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.499353 ms - Host latency: 0.499353 ms (end to end 0.499353 ms, enqueue 0.0259155 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498645 ms - Host latency: 0.498645 ms (end to end 0.498645 ms, enqueue 0.0225647 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.502606 ms - Host latency: 0.502606 ms (end to end 0.502606 ms, enqueue 0.0267212 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.533423 ms - Host latency: 0.533423 ms (end to end 0.533423 ms, enqueue 0.0236145 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496899 ms - Host latency: 0.496899 ms (end to end 0.496899 ms, enqueue 0.023053 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496545 ms - Host latency: 0.496545 ms (end to end 0.496545 ms, enqueue 0.0238647 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498627 ms - Host latency: 0.498627 ms (end to end 0.498627 ms, enqueue 0.0237915 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495117 ms - Host latency: 0.495117 ms (end to end 0.495117 ms, enqueue 0.0248474 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491376 ms - Host latency: 0.491376 ms (end to end 0.491376 ms, enqueue 0.027124 ms) 
[10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488599 ms - Host latency: 0.488599 ms (end to end 0.488599 ms, enqueue 0.0222046 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491748 ms - Host latency: 0.491748 ms (end to end 0.491748 ms, enqueue 0.0227539 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489362 ms - Host latency: 0.489362 ms (end to end 0.489362 ms, enqueue 0.0225586 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49007 ms - Host latency: 0.49007 ms (end to end 0.49007 ms, enqueue 0.0240356 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488544 ms - Host latency: 0.488544 ms (end to end 0.488544 ms, enqueue 0.0222595 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488702 ms - Host latency: 0.488702 ms (end to end 0.488702 ms, enqueue 0.0271545 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489142 ms - Host latency: 0.489142 ms (end to end 0.489142 ms, enqueue 0.0236267 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488177 ms - Host latency: 0.488177 ms (end to end 0.488177 ms, enqueue 0.0224976 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495203 ms - Host latency: 0.495203 ms (end to end 0.495203 ms, enqueue 0.0242981 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49079 ms - Host latency: 0.49079 ms (end to end 0.49079 ms, enqueue 0.031781 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491528 ms - Host latency: 0.491528 ms (end to end 0.491528 ms, enqueue 0.0287109 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489764 ms - Host latency: 0.489764 ms (end to end 0.489764 ms, enqueue 0.0223511 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495837 ms - Host latency: 0.495837 ms (end to end 0.495837 ms, enqueue 0.0284546 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49679 ms - Host latency: 0.49679 ms (end to end 0.49679 ms, enqueue 0.0279541 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493878 ms - Host latency: 0.493878 ms (end to end 0.493878 ms, enqueue 0.0250244 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494141 ms - Host latency: 0.494141 ms (end to end 0.494141 ms, enqueue 0.02323 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492352 ms - Host latency: 0.492352 ms (end to end 0.492352 ms, enqueue 0.0239563 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492377 ms - Host latency: 0.492377 ms (end to end 0.492377 ms, enqueue 0.0227722 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492712 ms - Host latency: 0.492712 ms (end to end 0.492712 ms, enqueue 0.0223572 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494379 ms - Host latency: 0.494379 ms (end to end 0.494379 ms, enqueue 0.0251648 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490125 ms - Host latency: 0.490125 ms (end to end 0.490125 ms, enqueue 0.0222778 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489679 ms - Host latency: 0.489679 ms (end to end 0.489679 ms, enqueue 0.0298645 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491455 ms - Host latency: 0.491455 ms (end to end 0.491455 ms, enqueue 0.0253296 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491083 ms - Host latency: 0.491083 ms (end to end 0.491083 ms, enqueue 0.0224731 ms) [10/04/2021-21:34:46] [I] Average on 
10 runs - GPU latency: 0.493793 ms - Host latency: 0.493793 ms (end to end 0.493793 ms, enqueue 0.0226318 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488757 ms - Host latency: 0.488757 ms (end to end 0.488757 ms, enqueue 0.0223694 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494226 ms - Host latency: 0.494226 ms (end to end 0.494226 ms, enqueue 0.0256775 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.497839 ms - Host latency: 0.497839 ms (end to end 0.497839 ms, enqueue 0.0222229 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488855 ms - Host latency: 0.488855 ms (end to end 0.488855 ms, enqueue 0.0225708 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489575 ms - Host latency: 0.489575 ms (end to end 0.489575 ms, enqueue 0.0234802 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489587 ms - Host latency: 0.489587 ms (end to end 0.489587 ms, enqueue 0.0262085 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490485 ms - Host latency: 0.490485 ms (end to end 0.490485 ms, enqueue 0.0231995 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488312 ms - Host latency: 0.488312 ms (end to end 0.488312 ms, enqueue 0.0236816 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490509 ms - Host latency: 0.490509 ms (end to end 0.490509 ms, enqueue 0.02547 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490594 ms - Host latency: 0.490594 ms (end to end 0.490594 ms, enqueue 0.0221436 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489404 ms - Host latency: 0.489404 ms (end to end 0.489404 ms, enqueue 0.0237732 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489185 ms - Host latency: 0.489185 ms (end to end 0.489185 ms, enqueue 0.0226379 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489569 ms - Host latency: 0.489569 ms (end to end 0.489569 ms, enqueue 0.0243896 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491376 ms - Host latency: 0.491376 ms (end to end 0.491376 ms, enqueue 0.0246521 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490448 ms - Host latency: 0.490448 ms (end to end 0.490448 ms, enqueue 0.024353 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490088 ms - Host latency: 0.490088 ms (end to end 0.490088 ms, enqueue 0.0244263 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489203 ms - Host latency: 0.489203 ms (end to end 0.489203 ms, enqueue 0.0224121 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.508826 ms - Host latency: 0.508826 ms (end to end 0.508826 ms, enqueue 0.0235718 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488562 ms - Host latency: 0.488562 ms (end to end 0.488562 ms, enqueue 0.0241577 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489478 ms - Host latency: 0.489478 ms (end to end 0.489478 ms, enqueue 0.0261475 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48963 ms - Host latency: 0.48963 ms (end to end 0.48963 ms, enqueue 0.0270935 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488977 ms - Host latency: 0.488977 ms (end to end 0.488977 ms, enqueue 0.0222534 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489062 ms - Host latency: 0.489062 ms (end to end 0.489062 ms, enqueue 0.0250549 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490686 
ms - Host latency: 0.490686 ms (end to end 0.490686 ms, enqueue 0.0243835 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491266 ms - Host latency: 0.491266 ms (end to end 0.491266 ms, enqueue 0.0265076 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490558 ms - Host latency: 0.490558 ms (end to end 0.490558 ms, enqueue 0.0229614 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491107 ms - Host latency: 0.491107 ms (end to end 0.491107 ms, enqueue 0.0224915 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495007 ms - Host latency: 0.495007 ms (end to end 0.495007 ms, enqueue 0.026709 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488269 ms - Host latency: 0.488269 ms (end to end 0.488269 ms, enqueue 0.0222656 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.497156 ms - Host latency: 0.497156 ms (end to end 0.497156 ms, enqueue 0.0278503 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493927 ms - Host latency: 0.493927 ms (end to end 0.493927 ms, enqueue 0.0259155 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489996 ms - Host latency: 0.489996 ms (end to end 0.489996 ms, enqueue 0.0226013 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488049 ms - Host latency: 0.488049 ms (end to end 0.488049 ms, enqueue 0.0220398 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488245 ms - Host latency: 0.488245 ms (end to end 0.488245 ms, enqueue 0.0255798 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488647 ms - Host latency: 0.488647 ms (end to end 0.488647 ms, enqueue 0.0221802 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493652 ms - Host latency: 0.493652 ms (end to end 0.493652 ms, enqueue 0.0232117 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489233 ms - Host latency: 0.489233 ms (end to end 0.489233 ms, enqueue 0.0220337 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490472 ms - Host latency: 0.490472 ms (end to end 0.490472 ms, enqueue 0.0221252 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490515 ms - Host latency: 0.490515 ms (end to end 0.490515 ms, enqueue 0.0258362 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488519 ms - Host latency: 0.488519 ms (end to end 0.488519 ms, enqueue 0.0244568 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493243 ms - Host latency: 0.493243 ms (end to end 0.493243 ms, enqueue 0.0228638 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.501678 ms - Host latency: 0.501678 ms (end to end 0.501678 ms, enqueue 0.0229309 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496478 ms - Host latency: 0.496478 ms (end to end 0.496478 ms, enqueue 0.0223633 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498059 ms - Host latency: 0.498059 ms (end to end 0.498059 ms, enqueue 0.0223206 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.502325 ms - Host latency: 0.502325 ms (end to end 0.502325 ms, enqueue 0.0248535 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.50304 ms - Host latency: 0.50304 ms (end to end 0.50304 ms, enqueue 0.022345 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489636 ms - Host latency: 0.489636 ms (end to end 0.489636 ms, enqueue 0.0261597 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487994 ms - Host latency: 0.487994 ms 
(end to end 0.487994 ms, enqueue 0.023877 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489453 ms - Host latency: 0.489453 ms (end to end 0.489453 ms, enqueue 0.02229 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488464 ms - Host latency: 0.488464 ms (end to end 0.488464 ms, enqueue 0.0222046 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494299 ms - Host latency: 0.494299 ms (end to end 0.494299 ms, enqueue 0.0241821 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.500714 ms - Host latency: 0.500714 ms (end to end 0.500714 ms, enqueue 0.028064 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488989 ms - Host latency: 0.488989 ms (end to end 0.488989 ms, enqueue 0.0230896 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490356 ms - Host latency: 0.490356 ms (end to end 0.490356 ms, enqueue 0.0244263 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490735 ms - Host latency: 0.490735 ms (end to end 0.490735 ms, enqueue 0.0225647 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488757 ms - Host latency: 0.488757 ms (end to end 0.488757 ms, enqueue 0.0262085 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488574 ms - Host latency: 0.488574 ms (end to end 0.488574 ms, enqueue 0.0240845 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488062 ms - Host latency: 0.488062 ms (end to end 0.488062 ms, enqueue 0.0244629 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492981 ms - Host latency: 0.492981 ms (end to end 0.492981 ms, enqueue 0.0243652 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490417 ms - Host latency: 0.490417 ms (end to end 0.490417 ms, enqueue 0.0224609 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489648 ms - Host latency: 0.489648 ms (end to end 0.489648 ms, enqueue 0.026001 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489026 ms - Host latency: 0.489026 ms (end to end 0.489026 ms, enqueue 0.0257446 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489185 ms - Host latency: 0.489185 ms (end to end 0.489185 ms, enqueue 0.0223633 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488245 ms - Host latency: 0.488245 ms (end to end 0.488245 ms, enqueue 0.0249512 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495129 ms - Host latency: 0.495129 ms (end to end 0.495129 ms, enqueue 0.0261841 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492554 ms - Host latency: 0.492554 ms (end to end 0.492554 ms, enqueue 0.0285645 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492615 ms - Host latency: 0.492615 ms (end to end 0.492615 ms, enqueue 0.0368896 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492725 ms - Host latency: 0.492725 ms (end to end 0.492725 ms, enqueue 0.0331055 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489136 ms - Host latency: 0.489136 ms (end to end 0.489136 ms, enqueue 0.0354004 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.504053 ms - Host latency: 0.504053 ms (end to end 0.504053 ms, enqueue 0.0331299 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488635 ms - Host latency: 0.488635 ms (end to end 0.488635 ms, enqueue 0.0303955 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488831 ms - Host latency: 0.488831 ms (end to end 0.488831 ms, 
enqueue 0.0258789 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491907 ms - Host latency: 0.491907 ms (end to end 0.491907 ms, enqueue 0.0224487 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491309 ms - Host latency: 0.491309 ms (end to end 0.491309 ms, enqueue 0.0226685 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489185 ms - Host latency: 0.489185 ms (end to end 0.489185 ms, enqueue 0.0264526 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489246 ms - Host latency: 0.489246 ms (end to end 0.489246 ms, enqueue 0.031665 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494116 ms - Host latency: 0.494116 ms (end to end 0.494116 ms, enqueue 0.0265869 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.497839 ms - Host latency: 0.497839 ms (end to end 0.497839 ms, enqueue 0.02948 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494348 ms - Host latency: 0.494348 ms (end to end 0.494348 ms, enqueue 0.0240112 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489587 ms - Host latency: 0.489587 ms (end to end 0.489587 ms, enqueue 0.022522 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494238 ms - Host latency: 0.494238 ms (end to end 0.494238 ms, enqueue 0.0229736 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488232 ms - Host latency: 0.488232 ms (end to end 0.488232 ms, enqueue 0.022583 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489258 ms - Host latency: 0.489258 ms (end to end 0.489258 ms, enqueue 0.0224365 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488904 ms - Host latency: 0.488904 ms (end to end 0.488904 ms, enqueue 0.025708 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489514 ms - Host latency: 0.489514 ms (end to end 0.489514 ms, enqueue 0.0239136 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488464 ms - Host latency: 0.488464 ms (end to end 0.488464 ms, enqueue 0.0240479 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491309 ms - Host latency: 0.491309 ms (end to end 0.491309 ms, enqueue 0.0261597 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.521545 ms - Host latency: 0.521545 ms (end to end 0.521545 ms, enqueue 0.0302002 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489343 ms - Host latency: 0.489343 ms (end to end 0.489343 ms, enqueue 0.0316528 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492859 ms - Host latency: 0.492859 ms (end to end 0.492859 ms, enqueue 0.0225342 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.542542 ms - Host latency: 0.542542 ms (end to end 0.542542 ms, enqueue 0.0232666 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490967 ms - Host latency: 0.490967 ms (end to end 0.490967 ms, enqueue 0.0276001 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488867 ms - Host latency: 0.488867 ms (end to end 0.488867 ms, enqueue 0.0228027 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492676 ms - Host latency: 0.492676 ms (end to end 0.492676 ms, enqueue 0.0255615 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488049 ms - Host latency: 0.488049 ms (end to end 0.488049 ms, enqueue 0.0225098 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490393 ms - Host latency: 0.490393 ms (end to end 0.490393 ms, enqueue 0.0253662 ms) 
[10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490784 ms - Host latency: 0.490784 ms (end to end 0.490784 ms, enqueue 0.0237793 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492541 ms - Host latency: 0.492541 ms (end to end 0.492541 ms, enqueue 0.0229858 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488684 ms - Host latency: 0.488684 ms (end to end 0.488684 ms, enqueue 0.0224609 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490894 ms - Host latency: 0.490894 ms (end to end 0.490894 ms, enqueue 0.0235962 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49209 ms - Host latency: 0.49209 ms (end to end 0.49209 ms, enqueue 0.0255005 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487146 ms - Host latency: 0.487146 ms (end to end 0.487146 ms, enqueue 0.0240723 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48916 ms - Host latency: 0.48916 ms (end to end 0.48916 ms, enqueue 0.022937 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496472 ms - Host latency: 0.496472 ms (end to end 0.496472 ms, enqueue 0.0278198 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488879 ms - Host latency: 0.488879 ms (end to end 0.488879 ms, enqueue 0.0221436 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488538 ms - Host latency: 0.488538 ms (end to end 0.488538 ms, enqueue 0.0248535 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490442 ms - Host latency: 0.490442 ms (end to end 0.490442 ms, enqueue 0.0254761 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493921 ms - Host latency: 0.493921 ms (end to end 0.493921 ms, enqueue 0.0265259 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49447 ms - Host latency: 0.49447 ms (end to end 0.49447 ms, enqueue 0.0223267 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491113 ms - Host latency: 0.491113 ms (end to end 0.491113 ms, enqueue 0.0259644 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.500415 ms - Host latency: 0.500415 ms (end to end 0.500415 ms, enqueue 0.0315308 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489661 ms - Host latency: 0.489661 ms (end to end 0.489661 ms, enqueue 0.0220215 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495947 ms - Host latency: 0.495947 ms (end to end 0.495947 ms, enqueue 0.0225952 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492432 ms - Host latency: 0.492432 ms (end to end 0.492432 ms, enqueue 0.0227539 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490344 ms - Host latency: 0.490344 ms (end to end 0.490344 ms, enqueue 0.0245117 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489404 ms - Host latency: 0.489404 ms (end to end 0.489404 ms, enqueue 0.0224487 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488196 ms - Host latency: 0.488196 ms (end to end 0.488196 ms, enqueue 0.028772 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488879 ms - Host latency: 0.488879 ms (end to end 0.488879 ms, enqueue 0.0356689 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491553 ms - Host latency: 0.491553 ms (end to end 0.491553 ms, enqueue 0.0306152 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492151 ms - Host latency: 0.492151 ms (end to end 0.492151 ms, enqueue 0.0271484 ms) [10/04/2021-21:34:46] [I] Average on 
10 runs - GPU latency: 0.488416 ms - Host latency: 0.488416 ms (end to end 0.488416 ms, enqueue 0.0227661 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488513 ms - Host latency: 0.488513 ms (end to end 0.488513 ms, enqueue 0.0220581 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489087 ms - Host latency: 0.489087 ms (end to end 0.489087 ms, enqueue 0.0226318 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494226 ms - Host latency: 0.494226 ms (end to end 0.494226 ms, enqueue 0.0221313 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488916 ms - Host latency: 0.488916 ms (end to end 0.488916 ms, enqueue 0.0266846 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488953 ms - Host latency: 0.488953 ms (end to end 0.488953 ms, enqueue 0.02677 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489966 ms - Host latency: 0.489966 ms (end to end 0.489966 ms, enqueue 0.0299194 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488953 ms - Host latency: 0.488953 ms (end to end 0.488953 ms, enqueue 0.0224121 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490784 ms - Host latency: 0.490784 ms (end to end 0.490784 ms, enqueue 0.0221313 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489734 ms - Host latency: 0.489734 ms (end to end 0.489734 ms, enqueue 0.0224854 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489246 ms - Host latency: 0.489246 ms (end to end 0.489246 ms, enqueue 0.0228027 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491589 ms - Host latency: 0.491589 ms (end to end 0.491589 ms, enqueue 0.0258545 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488489 ms - Host latency: 0.488489 ms (end to end 0.488489 ms, enqueue 0.0220459 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.505212 ms - Host latency: 0.505212 ms (end to end 0.505212 ms, enqueue 0.0269287 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492554 ms - Host latency: 0.492554 ms (end to end 0.492554 ms, enqueue 0.0240845 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494141 ms - Host latency: 0.494141 ms (end to end 0.494141 ms, enqueue 0.022583 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489282 ms - Host latency: 0.489282 ms (end to end 0.489282 ms, enqueue 0.0241455 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490735 ms - Host latency: 0.490735 ms (end to end 0.490735 ms, enqueue 0.0220215 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48811 ms - Host latency: 0.48811 ms (end to end 0.48811 ms, enqueue 0.0245605 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492871 ms - Host latency: 0.492871 ms (end to end 0.492871 ms, enqueue 0.0255127 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491479 ms - Host latency: 0.491479 ms (end to end 0.491479 ms, enqueue 0.0277588 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489246 ms - Host latency: 0.489246 ms (end to end 0.489246 ms, enqueue 0.0254517 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491272 ms - Host latency: 0.491272 ms (end to end 0.491272 ms, enqueue 0.0227173 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492725 ms - Host latency: 0.492725 ms (end to end 0.492725 ms, enqueue 0.0233154 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491016 
ms - Host latency: 0.491016 ms (end to end 0.491016 ms, enqueue 0.0223267 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489636 ms - Host latency: 0.489636 ms (end to end 0.489636 ms, enqueue 0.027417 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488977 ms - Host latency: 0.488977 ms (end to end 0.488977 ms, enqueue 0.0278564 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487549 ms - Host latency: 0.487549 ms (end to end 0.487549 ms, enqueue 0.022937 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.497351 ms - Host latency: 0.497351 ms (end to end 0.497351 ms, enqueue 0.0243896 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489148 ms - Host latency: 0.489148 ms (end to end 0.489148 ms, enqueue 0.022168 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490515 ms - Host latency: 0.490515 ms (end to end 0.490515 ms, enqueue 0.0266724 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490149 ms - Host latency: 0.490149 ms (end to end 0.490149 ms, enqueue 0.0288452 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489307 ms - Host latency: 0.489307 ms (end to end 0.489307 ms, enqueue 0.0287476 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489001 ms - Host latency: 0.489001 ms (end to end 0.489001 ms, enqueue 0.0225708 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490186 ms - Host latency: 0.490186 ms (end to end 0.490186 ms, enqueue 0.0235352 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489014 ms - Host latency: 0.489014 ms (end to end 0.489014 ms, enqueue 0.0289795 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493408 ms - Host latency: 0.493408 ms (end to end 0.493408 ms, enqueue 0.0264282 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496301 ms - Host latency: 0.496301 ms (end to end 0.496301 ms, enqueue 0.0244629 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488403 ms - Host latency: 0.488403 ms (end to end 0.488403 ms, enqueue 0.0230103 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489905 ms - Host latency: 0.489905 ms (end to end 0.489905 ms, enqueue 0.025769 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492383 ms - Host latency: 0.492383 ms (end to end 0.492383 ms, enqueue 0.0253784 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489001 ms - Host latency: 0.489001 ms (end to end 0.489001 ms, enqueue 0.0233643 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489038 ms - Host latency: 0.489038 ms (end to end 0.489038 ms, enqueue 0.02229 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489026 ms - Host latency: 0.489026 ms (end to end 0.489026 ms, enqueue 0.0221069 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489819 ms - Host latency: 0.489819 ms (end to end 0.489819 ms, enqueue 0.0254395 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489709 ms - Host latency: 0.489709 ms (end to end 0.489709 ms, enqueue 0.0221436 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.507214 ms - Host latency: 0.507214 ms (end to end 0.507214 ms, enqueue 0.0220459 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488355 ms - Host latency: 0.488355 ms (end to end 0.488355 ms, enqueue 0.0226685 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489783 ms - Host latency: 0.489783 ms 
(end to end 0.489783 ms, enqueue 0.0223877 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493347 ms - Host latency: 0.493347 ms (end to end 0.493347 ms, enqueue 0.0238037 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494299 ms - Host latency: 0.494299 ms (end to end 0.494299 ms, enqueue 0.0228271 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49342 ms - Host latency: 0.49342 ms (end to end 0.49342 ms, enqueue 0.0223755 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493835 ms - Host latency: 0.493835 ms (end to end 0.493835 ms, enqueue 0.0222046 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491882 ms - Host latency: 0.491882 ms (end to end 0.491882 ms, enqueue 0.0255493 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489282 ms - Host latency: 0.489282 ms (end to end 0.489282 ms, enqueue 0.027356 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491162 ms - Host latency: 0.491162 ms (end to end 0.491162 ms, enqueue 0.0225586 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49176 ms - Host latency: 0.49176 ms (end to end 0.49176 ms, enqueue 0.0250122 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495642 ms - Host latency: 0.495642 ms (end to end 0.495642 ms, enqueue 0.0221802 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489514 ms - Host latency: 0.489514 ms (end to end 0.489514 ms, enqueue 0.0229126 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487488 ms - Host latency: 0.487488 ms (end to end 0.487488 ms, enqueue 0.0283569 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492932 ms - Host latency: 0.492932 ms (end to end 0.492932 ms, enqueue 0.0259888 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.500085 ms - Host latency: 0.500085 ms (end to end 0.500085 ms, enqueue 0.0243408 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487927 ms - Host latency: 0.487927 ms (end to end 0.487927 ms, enqueue 0.0226807 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488916 ms - Host latency: 0.488916 ms (end to end 0.488916 ms, enqueue 0.0238892 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488916 ms - Host latency: 0.488916 ms (end to end 0.488916 ms, enqueue 0.0251709 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48844 ms - Host latency: 0.48844 ms (end to end 0.48844 ms, enqueue 0.0219971 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488989 ms - Host latency: 0.488989 ms (end to end 0.488989 ms, enqueue 0.0270752 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491736 ms - Host latency: 0.491736 ms (end to end 0.491736 ms, enqueue 0.0224976 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49082 ms - Host latency: 0.49082 ms (end to end 0.49082 ms, enqueue 0.0240601 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488196 ms - Host latency: 0.488196 ms (end to end 0.488196 ms, enqueue 0.0268311 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489392 ms - Host latency: 0.489392 ms (end to end 0.489392 ms, enqueue 0.0299683 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489294 ms - Host latency: 0.489294 ms (end to end 0.489294 ms, enqueue 0.0292236 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493787 ms - Host latency: 0.493787 ms (end to end 0.493787 ms, enqueue 
0.0270142 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492188 ms - Host latency: 0.492188 ms (end to end 0.492188 ms, enqueue 0.0331055 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490283 ms - Host latency: 0.490283 ms (end to end 0.490283 ms, enqueue 0.0220459 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490503 ms - Host latency: 0.490503 ms (end to end 0.490503 ms, enqueue 0.0238892 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488879 ms - Host latency: 0.488879 ms (end to end 0.488879 ms, enqueue 0.02323 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489087 ms - Host latency: 0.489087 ms (end to end 0.489087 ms, enqueue 0.025061 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.4901 ms - Host latency: 0.4901 ms (end to end 0.4901 ms, enqueue 0.0293701 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490332 ms - Host latency: 0.490332 ms (end to end 0.490332 ms, enqueue 0.0267456 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488501 ms - Host latency: 0.488501 ms (end to end 0.488501 ms, enqueue 0.0270386 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488147 ms - Host latency: 0.488147 ms (end to end 0.488147 ms, enqueue 0.0226929 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489246 ms - Host latency: 0.489246 ms (end to end 0.489246 ms, enqueue 0.0228149 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.501086 ms - Host latency: 0.501086 ms (end to end 0.501086 ms, enqueue 0.0266602 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490918 ms - Host latency: 0.490918 ms (end to end 0.490918 ms, enqueue 0.0225708 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490051 ms - Host latency: 0.490051 ms (end to end 0.490051 ms, enqueue 0.022522 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489722 ms - Host latency: 0.489722 ms (end to end 0.489722 ms, enqueue 0.0223999 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488611 ms - Host latency: 0.488611 ms (end to end 0.488611 ms, enqueue 0.0233887 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493372 ms - Host latency: 0.493372 ms (end to end 0.493372 ms, enqueue 0.0233154 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490198 ms - Host latency: 0.490198 ms (end to end 0.490198 ms, enqueue 0.0265625 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487878 ms - Host latency: 0.487878 ms (end to end 0.487878 ms, enqueue 0.0278442 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488293 ms - Host latency: 0.488293 ms (end to end 0.488293 ms, enqueue 0.0223877 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491663 ms - Host latency: 0.491663 ms (end to end 0.491663 ms, enqueue 0.023999 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490649 ms - Host latency: 0.490649 ms (end to end 0.490649 ms, enqueue 0.0228149 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496094 ms - Host latency: 0.496094 ms (end to end 0.496094 ms, enqueue 0.0240967 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496838 ms - Host latency: 0.496838 ms (end to end 0.496838 ms, enqueue 0.0266602 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489233 ms - Host latency: 0.489233 ms (end to end 0.489233 ms, enqueue 0.0228394 ms) [10/04/2021-21:34:46] 
[I] Average on 10 runs - GPU latency: 0.490637 ms - Host latency: 0.490637 ms (end to end 0.490637 ms, enqueue 0.0246948 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488672 ms - Host latency: 0.488672 ms (end to end 0.488672 ms, enqueue 0.0226929 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489856 ms - Host latency: 0.489856 ms (end to end 0.489856 ms, enqueue 0.0228638 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488098 ms - Host latency: 0.488098 ms (end to end 0.488098 ms, enqueue 0.0223267 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489783 ms - Host latency: 0.489783 ms (end to end 0.489783 ms, enqueue 0.0266968 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490613 ms - Host latency: 0.490613 ms (end to end 0.490613 ms, enqueue 0.0255737 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490039 ms - Host latency: 0.490039 ms (end to end 0.490039 ms, enqueue 0.0223755 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494019 ms - Host latency: 0.494019 ms (end to end 0.494019 ms, enqueue 0.0239624 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489099 ms - Host latency: 0.489099 ms (end to end 0.489099 ms, enqueue 0.0252197 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.501721 ms - Host latency: 0.501721 ms (end to end 0.501721 ms, enqueue 0.0259399 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487781 ms - Host latency: 0.487781 ms (end to end 0.487781 ms, enqueue 0.0292969 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492554 ms - Host latency: 0.492554 ms (end to end 0.492554 ms, enqueue 0.021875 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492151 ms - Host latency: 0.492151 ms (end to end 0.492151 ms, enqueue 0.0250488 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488489 ms - Host latency: 0.488489 ms (end to end 0.488489 ms, enqueue 0.0289063 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490649 ms - Host latency: 0.490649 ms (end to end 0.490649 ms, enqueue 0.0251221 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495679 ms - Host latency: 0.495679 ms (end to end 0.495679 ms, enqueue 0.0224121 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.510303 ms - Host latency: 0.510303 ms (end to end 0.510303 ms, enqueue 0.0223145 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495337 ms - Host latency: 0.495337 ms (end to end 0.495337 ms, enqueue 0.0241943 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490625 ms - Host latency: 0.490625 ms (end to end 0.490625 ms, enqueue 0.0289307 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489648 ms - Host latency: 0.489648 ms (end to end 0.489648 ms, enqueue 0.0329834 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488672 ms - Host latency: 0.488672 ms (end to end 0.488672 ms, enqueue 0.0349609 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493774 ms - Host latency: 0.493774 ms (end to end 0.493774 ms, enqueue 0.0383545 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490381 ms - Host latency: 0.490381 ms (end to end 0.490381 ms, enqueue 0.0324707 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490796 ms - Host latency: 0.490796 ms (end to end 0.490796 ms, enqueue 0.0300049 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - 
GPU latency: 0.489453 ms - Host latency: 0.489453 ms (end to end 0.489453 ms, enqueue 0.0316895 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488623 ms - Host latency: 0.488623 ms (end to end 0.488623 ms, enqueue 0.0340576 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490137 ms - Host latency: 0.490137 ms (end to end 0.490137 ms, enqueue 0.0270264 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488989 ms - Host latency: 0.488989 ms (end to end 0.488989 ms, enqueue 0.0255371 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488501 ms - Host latency: 0.488501 ms (end to end 0.488501 ms, enqueue 0.0271484 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490112 ms - Host latency: 0.490112 ms (end to end 0.490112 ms, enqueue 0.0228271 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493018 ms - Host latency: 0.493018 ms (end to end 0.493018 ms, enqueue 0.023999 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490918 ms - Host latency: 0.490918 ms (end to end 0.490918 ms, enqueue 0.0227783 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.499536 ms - Host latency: 0.499536 ms (end to end 0.499536 ms, enqueue 0.0249512 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492456 ms - Host latency: 0.492456 ms (end to end 0.492456 ms, enqueue 0.0220703 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489331 ms - Host latency: 0.489331 ms (end to end 0.489331 ms, enqueue 0.0222168 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490088 ms - Host latency: 0.490088 ms (end to end 0.490088 ms, enqueue 0.0238525 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496313 ms - Host latency: 0.496313 ms (end to end 0.496313 ms, enqueue 0.0229248 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493359 ms - Host latency: 0.493359 ms (end to end 0.493359 ms, enqueue 0.0224854 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489185 ms - Host latency: 0.489185 ms (end to end 0.489185 ms, enqueue 0.0227295 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489233 ms - Host latency: 0.489233 ms (end to end 0.489233 ms, enqueue 0.0257324 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488965 ms - Host latency: 0.488965 ms (end to end 0.488965 ms, enqueue 0.0240234 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487842 ms - Host latency: 0.487842 ms (end to end 0.487842 ms, enqueue 0.0256592 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489502 ms - Host latency: 0.489502 ms (end to end 0.489502 ms, enqueue 0.0262695 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488232 ms - Host latency: 0.488232 ms (end to end 0.488232 ms, enqueue 0.0234131 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489844 ms - Host latency: 0.489844 ms (end to end 0.489844 ms, enqueue 0.0261475 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488599 ms - Host latency: 0.488599 ms (end to end 0.488599 ms, enqueue 0.0351074 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488477 ms - Host latency: 0.488477 ms (end to end 0.488477 ms, enqueue 0.0273926 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.504102 ms - Host latency: 0.504102 ms (end to end 0.504102 ms, enqueue 0.022583 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490015 ms - 
Host latency: 0.490015 ms (end to end 0.490015 ms, enqueue 0.0258545 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490503 ms - Host latency: 0.490503 ms (end to end 0.490503 ms, enqueue 0.026416 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.536084 ms - Host latency: 0.536084 ms (end to end 0.536084 ms, enqueue 0.0244873 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490747 ms - Host latency: 0.490747 ms (end to end 0.490747 ms, enqueue 0.0256348 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489111 ms - Host latency: 0.489111 ms (end to end 0.489111 ms, enqueue 0.0244873 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489771 ms - Host latency: 0.489771 ms (end to end 0.489771 ms, enqueue 0.0231445 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492578 ms - Host latency: 0.492578 ms (end to end 0.492578 ms, enqueue 0.0224365 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488721 ms - Host latency: 0.488721 ms (end to end 0.488721 ms, enqueue 0.0253662 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.508472 ms - Host latency: 0.508472 ms (end to end 0.508472 ms, enqueue 0.023584 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489771 ms - Host latency: 0.489771 ms (end to end 0.489771 ms, enqueue 0.0241943 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492139 ms - Host latency: 0.492139 ms (end to end 0.492139 ms, enqueue 0.0298828 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493188 ms - Host latency: 0.493188 ms (end to end 0.493188 ms, enqueue 0.0223877 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49729 ms - Host latency: 0.49729 ms (end to end 0.49729 ms, enqueue 0.0226563 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487842 ms - Host latency: 0.487842 ms (end to end 0.487842 ms, enqueue 0.0275635 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488647 ms - Host latency: 0.488647 ms (end to end 0.488647 ms, enqueue 0.0238037 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489258 ms - Host latency: 0.489258 ms (end to end 0.489258 ms, enqueue 0.0226563 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488281 ms - Host latency: 0.488281 ms (end to end 0.488281 ms, enqueue 0.0227295 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49187 ms - Host latency: 0.49187 ms (end to end 0.49187 ms, enqueue 0.0302979 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488599 ms - Host latency: 0.488599 ms (end to end 0.488599 ms, enqueue 0.0242676 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487964 ms - Host latency: 0.487964 ms (end to end 0.487964 ms, enqueue 0.0244385 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488355 ms - Host latency: 0.488355 ms (end to end 0.488355 ms, enqueue 0.0224609 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492627 ms - Host latency: 0.492627 ms (end to end 0.492627 ms, enqueue 0.0223877 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490405 ms - Host latency: 0.490405 ms (end to end 0.490405 ms, enqueue 0.0266602 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488013 ms - Host latency: 0.488013 ms (end to end 0.488013 ms, enqueue 0.0225586 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488696 ms - Host latency: 0.488696 ms (end to 
end 0.488696 ms, enqueue 0.0225342 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488379 ms - Host latency: 0.488379 ms (end to end 0.488379 ms, enqueue 0.0239014 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492798 ms - Host latency: 0.492798 ms (end to end 0.492798 ms, enqueue 0.0251221 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490063 ms - Host latency: 0.490063 ms (end to end 0.490063 ms, enqueue 0.0232178 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488989 ms - Host latency: 0.488989 ms (end to end 0.488989 ms, enqueue 0.0226074 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488403 ms - Host latency: 0.488403 ms (end to end 0.488403 ms, enqueue 0.0229736 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489502 ms - Host latency: 0.489502 ms (end to end 0.489502 ms, enqueue 0.0222412 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.50166 ms - Host latency: 0.50166 ms (end to end 0.50166 ms, enqueue 0.024585 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488184 ms - Host latency: 0.488184 ms (end to end 0.488184 ms, enqueue 0.0221191 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489624 ms - Host latency: 0.489624 ms (end to end 0.489624 ms, enqueue 0.0243896 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489185 ms - Host latency: 0.489185 ms (end to end 0.489185 ms, enqueue 0.0225342 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488892 ms - Host latency: 0.488892 ms (end to end 0.488892 ms, enqueue 0.0226318 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488525 ms - Host latency: 0.488525 ms (end to end 0.488525 ms, enqueue 0.0224121 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488843 ms - Host latency: 0.488843 ms (end to end 0.488843 ms, enqueue 0.0227295 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489722 ms - Host latency: 0.489722 ms (end to end 0.489722 ms, enqueue 0.0251953 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488281 ms - Host latency: 0.488281 ms (end to end 0.488281 ms, enqueue 0.0257324 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494775 ms - Host latency: 0.494775 ms (end to end 0.494775 ms, enqueue 0.0233643 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489771 ms - Host latency: 0.489771 ms (end to end 0.489771 ms, enqueue 0.0270508 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48916 ms - Host latency: 0.48916 ms (end to end 0.48916 ms, enqueue 0.0222412 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491772 ms - Host latency: 0.491772 ms (end to end 0.491772 ms, enqueue 0.0227783 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489185 ms - Host latency: 0.489185 ms (end to end 0.489185 ms, enqueue 0.0224365 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490283 ms - Host latency: 0.490283 ms (end to end 0.490283 ms, enqueue 0.0272949 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488721 ms - Host latency: 0.488721 ms (end to end 0.488721 ms, enqueue 0.0251465 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489404 ms - Host latency: 0.489404 ms (end to end 0.489404 ms, enqueue 0.0220703 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488501 ms - Host latency: 0.488501 ms (end to end 0.488501 ms, enqueue 
0.0252686 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488623 ms - Host latency: 0.488623 ms (end to end 0.488623 ms, enqueue 0.025 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488477 ms - Host latency: 0.488477 ms (end to end 0.488477 ms, enqueue 0.0228027 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491187 ms - Host latency: 0.491187 ms (end to end 0.491187 ms, enqueue 0.027124 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48999 ms - Host latency: 0.48999 ms (end to end 0.48999 ms, enqueue 0.0220947 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.4896 ms - Host latency: 0.4896 ms (end to end 0.4896 ms, enqueue 0.0221191 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.502661 ms - Host latency: 0.502661 ms (end to end 0.502661 ms, enqueue 0.0230713 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488452 ms - Host latency: 0.488452 ms (end to end 0.488452 ms, enqueue 0.0225342 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49043 ms - Host latency: 0.49043 ms (end to end 0.49043 ms, enqueue 0.0221924 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491626 ms - Host latency: 0.491626 ms (end to end 0.491626 ms, enqueue 0.0265137 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489282 ms - Host latency: 0.489282 ms (end to end 0.489282 ms, enqueue 0.0225098 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492456 ms - Host latency: 0.492456 ms (end to end 0.492456 ms, enqueue 0.0298584 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494067 ms - Host latency: 0.494067 ms (end to end 0.494067 ms, enqueue 0.0302246 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488452 ms - Host latency: 0.488452 ms (end to end 0.488452 ms, enqueue 0.0234863 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.4927 ms - Host latency: 0.4927 ms (end to end 0.4927 ms, enqueue 0.0239258 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493335 ms - Host latency: 0.493335 ms (end to end 0.493335 ms, enqueue 0.0223877 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492603 ms - Host latency: 0.492603 ms (end to end 0.492603 ms, enqueue 0.0312744 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488257 ms - Host latency: 0.488257 ms (end to end 0.488257 ms, enqueue 0.0248779 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488281 ms - Host latency: 0.488281 ms (end to end 0.488281 ms, enqueue 0.0222412 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48877 ms - Host latency: 0.48877 ms (end to end 0.48877 ms, enqueue 0.0229492 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489136 ms - Host latency: 0.489136 ms (end to end 0.489136 ms, enqueue 0.0223389 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491357 ms - Host latency: 0.491357 ms (end to end 0.491357 ms, enqueue 0.0291504 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490234 ms - Host latency: 0.490234 ms (end to end 0.490234 ms, enqueue 0.0227295 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491626 ms - Host latency: 0.491626 ms (end to end 0.491626 ms, enqueue 0.0233643 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489087 ms - Host latency: 0.489087 ms (end to end 0.489087 ms, enqueue 0.0235107 ms) [10/04/2021-21:34:46] [I] Average on 
10 runs - GPU latency: 0.490112 ms - Host latency: 0.490112 ms (end to end 0.490112 ms, enqueue 0.0248047 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492529 ms - Host latency: 0.492529 ms (end to end 0.492529 ms, enqueue 0.0274902 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488086 ms - Host latency: 0.488086 ms (end to end 0.488086 ms, enqueue 0.02229 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489868 ms - Host latency: 0.489868 ms (end to end 0.489868 ms, enqueue 0.0225342 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.503491 ms - Host latency: 0.503491 ms (end to end 0.503491 ms, enqueue 0.02229 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49021 ms - Host latency: 0.49021 ms (end to end 0.49021 ms, enqueue 0.0225098 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.496265 ms - Host latency: 0.496265 ms (end to end 0.496265 ms, enqueue 0.023877 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487915 ms - Host latency: 0.487915 ms (end to end 0.487915 ms, enqueue 0.0225098 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489526 ms - Host latency: 0.489526 ms (end to end 0.489526 ms, enqueue 0.0240967 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.511792 ms - Host latency: 0.511792 ms (end to end 0.511792 ms, enqueue 0.0243408 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489038 ms - Host latency: 0.489038 ms (end to end 0.489038 ms, enqueue 0.0238037 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488818 ms - Host latency: 0.488818 ms (end to end 0.488818 ms, enqueue 0.0237549 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494458 ms - Host latency: 0.494458 ms (end to end 0.494458 ms, enqueue 0.0241455 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494409 ms - Host latency: 0.494409 ms (end to end 0.494409 ms, enqueue 0.0234131 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488989 ms - Host latency: 0.488989 ms (end to end 0.488989 ms, enqueue 0.0244629 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488184 ms - Host latency: 0.488184 ms (end to end 0.488184 ms, enqueue 0.0284668 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48916 ms - Host latency: 0.48916 ms (end to end 0.48916 ms, enqueue 0.0222168 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489648 ms - Host latency: 0.489648 ms (end to end 0.489648 ms, enqueue 0.0221436 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490942 ms - Host latency: 0.490942 ms (end to end 0.490942 ms, enqueue 0.0221436 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488916 ms - Host latency: 0.488916 ms (end to end 0.488916 ms, enqueue 0.0222412 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490186 ms - Host latency: 0.490186 ms (end to end 0.490186 ms, enqueue 0.0228516 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488965 ms - Host latency: 0.488965 ms (end to end 0.488965 ms, enqueue 0.0263184 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489795 ms - Host latency: 0.489795 ms (end to end 0.489795 ms, enqueue 0.0254639 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489014 ms - Host latency: 0.489014 ms (end to end 0.489014 ms, enqueue 0.0221191 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487402 ms - 
Host latency: 0.487402 ms (end to end 0.487402 ms, enqueue 0.0256592 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488916 ms - Host latency: 0.488916 ms (end to end 0.488916 ms, enqueue 0.0231934 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488647 ms - Host latency: 0.488647 ms (end to end 0.488647 ms, enqueue 0.0221924 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.508496 ms - Host latency: 0.508496 ms (end to end 0.508496 ms, enqueue 0.0279297 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488501 ms - Host latency: 0.488501 ms (end to end 0.488501 ms, enqueue 0.022168 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489429 ms - Host latency: 0.489429 ms (end to end 0.489429 ms, enqueue 0.0230469 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488037 ms - Host latency: 0.488037 ms (end to end 0.488037 ms, enqueue 0.0224365 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488281 ms - Host latency: 0.488281 ms (end to end 0.488281 ms, enqueue 0.021875 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491016 ms - Host latency: 0.491016 ms (end to end 0.491016 ms, enqueue 0.0303955 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488989 ms - Host latency: 0.488989 ms (end to end 0.488989 ms, enqueue 0.022583 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494287 ms - Host latency: 0.494287 ms (end to end 0.494287 ms, enqueue 0.0243652 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.50437 ms - Host latency: 0.50437 ms (end to end 0.50437 ms, enqueue 0.0296143 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.498193 ms - Host latency: 0.498193 ms (end to end 0.498193 ms, enqueue 0.0225098 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495654 ms - Host latency: 0.495654 ms (end to end 0.495654 ms, enqueue 0.0233154 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494775 ms - Host latency: 0.494775 ms (end to end 0.494775 ms, enqueue 0.0239258 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490039 ms - Host latency: 0.490039 ms (end to end 0.490039 ms, enqueue 0.0228027 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48877 ms - Host latency: 0.48877 ms (end to end 0.48877 ms, enqueue 0.022583 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489722 ms - Host latency: 0.489722 ms (end to end 0.489722 ms, enqueue 0.0241455 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487085 ms - Host latency: 0.487085 ms (end to end 0.487085 ms, enqueue 0.0240479 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489014 ms - Host latency: 0.489014 ms (end to end 0.489014 ms, enqueue 0.0222412 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489404 ms - Host latency: 0.489404 ms (end to end 0.489404 ms, enqueue 0.0311279 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48938 ms - Host latency: 0.48938 ms (end to end 0.48938 ms, enqueue 0.0222412 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493726 ms - Host latency: 0.493726 ms (end to end 0.493726 ms, enqueue 0.0259766 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.493359 ms - Host latency: 0.493359 ms (end to end 0.493359 ms, enqueue 0.0231689 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489697 ms - Host latency: 0.489697 ms (end to end 
0.489697 ms, enqueue 0.0254395 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489478 ms - Host latency: 0.489478 ms (end to end 0.489478 ms, enqueue 0.0251465 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.48877 ms - Host latency: 0.48877 ms (end to end 0.48877 ms, enqueue 0.0257812 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492456 ms - Host latency: 0.492456 ms (end to end 0.492456 ms, enqueue 0.0224854 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.494458 ms - Host latency: 0.494458 ms (end to end 0.494458 ms, enqueue 0.0257324 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489111 ms - Host latency: 0.489111 ms (end to end 0.489111 ms, enqueue 0.0330811 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49978 ms - Host latency: 0.49978 ms (end to end 0.49978 ms, enqueue 0.0223145 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495264 ms - Host latency: 0.495264 ms (end to end 0.495264 ms, enqueue 0.0226074 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492456 ms - Host latency: 0.492456 ms (end to end 0.492456 ms, enqueue 0.0259277 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490332 ms - Host latency: 0.490332 ms (end to end 0.490332 ms, enqueue 0.0257812 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489429 ms - Host latency: 0.489429 ms (end to end 0.489429 ms, enqueue 0.0234863 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489038 ms - Host latency: 0.489038 ms (end to end 0.489038 ms, enqueue 0.0240967 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489673 ms - Host latency: 0.489673 ms (end to end 0.489673 ms, enqueue 0.022583 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492114 ms - Host latency: 0.492114 ms (end to end 0.492114 ms, enqueue 0.0248047 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488477 ms - Host latency: 0.488477 ms (end to end 0.488477 ms, enqueue 0.0222412 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492261 ms - Host latency: 0.492261 ms (end to end 0.492261 ms, enqueue 0.0248535 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489478 ms - Host latency: 0.489478 ms (end to end 0.489478 ms, enqueue 0.0239014 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491895 ms - Host latency: 0.491895 ms (end to end 0.491895 ms, enqueue 0.0280273 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488794 ms - Host latency: 0.488794 ms (end to end 0.488794 ms, enqueue 0.0223145 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.489136 ms - Host latency: 0.489136 ms (end to end 0.489136 ms, enqueue 0.0227783 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.497046 ms - Host latency: 0.497046 ms (end to end 0.497046 ms, enqueue 0.0235107 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491162 ms - Host latency: 0.491162 ms (end to end 0.491162 ms, enqueue 0.0225586 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488281 ms - Host latency: 0.488281 ms (end to end 0.488281 ms, enqueue 0.0225342 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49043 ms - Host latency: 0.49043 ms (end to end 0.49043 ms, enqueue 0.0220703 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488672 ms - Host latency: 0.488672 ms (end to end 0.488672 ms, enqueue 0.0270508 ms) 
[10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488818 ms - Host latency: 0.488818 ms (end to end 0.488818 ms, enqueue 0.0331787 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.502759 ms - Host latency: 0.502759 ms (end to end 0.502759 ms, enqueue 0.0235596 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.490771 ms - Host latency: 0.490771 ms (end to end 0.490771 ms, enqueue 0.0223877 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488574 ms - Host latency: 0.488574 ms (end to end 0.488574 ms, enqueue 0.0232178 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488721 ms - Host latency: 0.488721 ms (end to end 0.488721 ms, enqueue 0.0258545 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491138 ms - Host latency: 0.491138 ms (end to end 0.491138 ms, enqueue 0.0268311 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.492065 ms - Host latency: 0.492065 ms (end to end 0.492065 ms, enqueue 0.0264648 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.497876 ms - Host latency: 0.497876 ms (end to end 0.497876 ms, enqueue 0.0276123 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49707 ms - Host latency: 0.49707 ms (end to end 0.49707 ms, enqueue 0.0256836 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495898 ms - Host latency: 0.495898 ms (end to end 0.495898 ms, enqueue 0.026416 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.49834 ms - Host latency: 0.49834 ms (end to end 0.49834 ms, enqueue 0.0226563 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.499536 ms - Host latency: 0.499536 ms (end to end 0.499536 ms, enqueue 0.0302979 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.495654 ms - Host latency: 0.495654 ms (end to end 0.495654 ms, enqueue 0.0233154 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487939 ms - Host latency: 0.487939 ms (end to end 0.487939 ms, enqueue 0.0227051 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488403 ms - Host latency: 0.488403 ms (end to end 0.488403 ms, enqueue 0.0229004 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.487817 ms - Host latency: 0.487817 ms (end to end 0.487817 ms, enqueue 0.0226563 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.491284 ms - Host latency: 0.491284 ms (end to end 0.491284 ms, enqueue 0.0290283 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488403 ms - Host latency: 0.488403 ms (end to end 0.488403 ms, enqueue 0.0224121 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488916 ms - Host latency: 0.488916 ms (end to end 0.488916 ms, enqueue 0.0242432 ms) [10/04/2021-21:34:46] [I] Average on 10 runs - GPU latency: 0.488721 ms - Host latency: 0.488721 ms (end to end 0.488721 ms, enqueue 0.0305908 ms)
[10/04/2021-21:34:46] [I]
[10/04/2021-21:34:46] [I] === Performance summary ===
[10/04/2021-21:34:46] [I] Throughput: 1719.62 qps
[10/04/2021-21:34:46] [I] Latency: min = 0.480957 ms, max = 0.942078 ms, mean = 0.495692 ms, median = 0.489502 ms, percentile(99%) = 0.59082 ms
[10/04/2021-21:34:46] [I] End-to-End Host Latency: min = 0.480957 ms, max = 0.942078 ms, mean = 0.495692 ms, median = 0.489502 ms, percentile(99%) = 0.59082 ms
[10/04/2021-21:34:46] [I] Enqueue Time: min = 0.0212402 ms, max = 0.127441 ms, mean = 0.0247869 ms, median = 0.0223999 ms, percentile(99%) = 0.0568848 ms
[10/04/2021-21:34:46] [I] H2D Latency: min = 0 ms, max = 0 ms, mean = 0 ms, median = 0 ms, percentile(99%) = 0 ms
[10/04/2021-21:34:46] [I] GPU Compute Time: min = 0.480957 ms, max = 0.942078 ms, mean = 0.495692 ms, median = 0.489502 ms, percentile(99%) = 0.59082 ms
[10/04/2021-21:34:46] [I] D2H Latency: min = 0 ms, max = 0 ms, mean = 0 ms, median = 0 ms, percentile(99%) = 0 ms
[10/04/2021-21:34:46] [I] Total Host Walltime: 3.00066 s
[10/04/2021-21:34:46] [I] Total GPU Compute Time: 2.55777 s
[10/04/2021-21:34:46] [I] Explanations of the performance metrics are printed in the verbose logs.
[10/04/2021-21:34:46] [V]
[10/04/2021-21:34:46] [V] === Explanations of the performance metrics ===
[10/04/2021-21:34:46] [V] Total Host Walltime: the host walltime from when the first query (after warmups) is enqueued to when the last query is completed.
[10/04/2021-21:34:46] [V] GPU Compute Time: the GPU latency to execute the kernels for a query.
[10/04/2021-21:34:46] [V] Total GPU Compute Time: the summation of the GPU Compute Time of all the queries. If this is significantly shorter than Total Host Walltime, the GPU may be under-utilized because of host-side overheads or data transfers.
[10/04/2021-21:34:46] [V] Throughput: the observed throughput computed by dividing the number of queries by the Total Host Walltime. If this is significantly lower than the reciprocal of GPU Compute Time, the GPU may be under-utilized because of host-side overheads or data transfers.
[10/04/2021-21:34:46] [V] Enqueue Time: the host latency to enqueue a query. If this is longer than GPU Compute Time, the GPU may be under-utilized.
[10/04/2021-21:34:46] [V] H2D Latency: the latency for host-to-device data transfers for input tensors of a single query.
[10/04/2021-21:34:46] [V] D2H Latency: the latency for device-to-host data transfers for output tensors of a single query.
[10/04/2021-21:34:46] [V] Latency: the summation of H2D Latency, GPU Compute Time, and D2H Latency. This is the latency to infer a single query.
[10/04/2021-21:34:46] [V] End-to-End Host Latency: the duration from when the H2D of a query is called to when the D2H of the same query is completed, which includes the latency to wait for the completion of the previous query. This is the latency of a query if multiple queries are enqueued consecutively.
[10/04/2021-21:34:46] [I] &&&& PASSED TensorRT.trtexec [TensorRT v8003] # C:\Program Files\NVIDIA GPU Computing Toolkit\TensorRT-8.0.3.4\bin\trtexec.exe --onnx=resnet.onnx --useCudaGraph --threads --noDataTransfers --verbose --best
[10/04/2021-21:34:46] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 9837, GPU 1175 (MiB)
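A quick arithmetic cross-check of the summary above: Total GPU Compute Time (2.55777 s) divided by the mean GPU Compute Time (0.495692 ms) gives roughly 5160 queries, and 5160 queries over the Total Host Walltime of 3.00066 s is about 1719.6 qps, matching the reported Throughput. The reciprocal of the mean GPU Compute Time is about 2017 qps, so, following the metric explanations printed above, the gap between that figure and the observed throughput is the portion attributable to host-side overheads rather than kernel time. Below is a minimal, self-contained Python sketch (not part of the trtexec output) for pulling the "Average on 10 runs" lines out of a saved copy of this log and recomputing min/max/mean/median/p99. The file name trtexec.log is a placeholder assumption, and because these entries are 10-run averages the printed statistics will only approximate the per-query Performance summary.

# parse_trtexec_log.py -- illustrative sketch only; not produced by trtexec.
# Assumption: the console output above was saved verbatim to "trtexec.log".
import re
import statistics

# Matches the per-interval lines printed above, e.g.
# "Average on 10 runs - GPU latency: 0.488416 ms - ... enqueue 0.0227661 ms)"
PATTERN = re.compile(
    r"Average on 10 runs - GPU latency: ([0-9.]+) ms.*?enqueue ([0-9.]+) ms"
)

def parse(path="trtexec.log"):
    gpu_ms, enqueue_ms = [], []
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            # finditer also handles logs where several entries ended up on one line.
            for match in PATTERN.finditer(line):
                gpu_ms.append(float(match.group(1)))
                enqueue_ms.append(float(match.group(2)))
    return gpu_ms, enqueue_ms

def summarize(name, values_ms):
    # Same columns as the "Performance summary" section; nearest-rank p99.
    ordered = sorted(values_ms)
    p99 = ordered[min(len(ordered) - 1, round(0.99 * (len(ordered) - 1)))]
    print(f"{name}: min = {ordered[0]:.6f} ms, max = {ordered[-1]:.6f} ms, "
          f"mean = {statistics.mean(ordered):.6f} ms, "
          f"median = {statistics.median(ordered):.6f} ms, "
          f"percentile(99%) = {p99:.6f} ms")

if __name__ == "__main__":
    gpu_ms, enqueue_ms = parse()
    summarize("GPU Compute Time (over 10-run averages)", gpu_ms)
    summarize("Enqueue Time (over 10-run averages)", enqueue_ms)
    # With --noDataTransfers the H2D/D2H terms are 0 ms, so Latency as defined
    # in the explanations above reduces to the GPU Compute Time term here.

Run against this log, the mean it reports for the 10-run GPU averages would be expected to land close to the 0.495692 ms mean in the Performance summary, while the min/max/p99 will be milder than the per-query values because averaging over 10 runs smooths out the extremes.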