&&&& RUNNING TensorRT.trtexec [TensorRT v8400] # ./trtexec --onnx=/home/omnisky/yanxf/GlobalPointer-main/tf15_models/output5.onnx --saveEngine=/home/omnisky/yanxf/GlobalPointer-main/tf15_models/output5.trt --verbose --workspace=6000
[04/08/2022-14:45:37] [W] --workspace flag has been deprecated by --memPoolSize flag.
[04/08/2022-14:45:37] [I] === Model Options ===
[04/08/2022-14:45:37] [I] Format: ONNX
[04/08/2022-14:45:37] [I] Model: /home/omnisky/yanxf/GlobalPointer-main/tf15_models/output5.onnx
[04/08/2022-14:45:37] [I] Output:
[04/08/2022-14:45:37] [I] === Build Options ===
[04/08/2022-14:45:37] [I] Max batch: explicit batch
[04/08/2022-14:45:37] [I] Memory Pools: workspace: 6000 MiB, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[04/08/2022-14:45:37] [I] minTiming: 1
[04/08/2022-14:45:37] [I] avgTiming: 8
[04/08/2022-14:45:37] [I] Precision: FP32
[04/08/2022-14:45:37] [I] LayerPrecisions:
[04/08/2022-14:45:37] [I] Calibration:
[04/08/2022-14:45:37] [I] Refit: Disabled
[04/08/2022-14:45:37] [I] Sparsity: Disabled
[04/08/2022-14:45:37] [I] Safe mode: Disabled
[04/08/2022-14:45:37] [I] DirectIO mode: Disabled
[04/08/2022-14:45:37] [I] Restricted mode: Disabled
[04/08/2022-14:45:37] [I] Save engine: /home/omnisky/yanxf/GlobalPointer-main/tf15_models/output5.trt
[04/08/2022-14:45:37] [I] Load engine:
[04/08/2022-14:45:37] [I] Profiling verbosity: 0
[04/08/2022-14:45:37] [I] Tactic sources: Using default tactic sources
[04/08/2022-14:45:37] [I] timingCacheMode: local
[04/08/2022-14:45:37] [I] timingCacheFile:
[04/08/2022-14:45:37] [I] Input(s)s format: fp32:CHW
[04/08/2022-14:45:37] [I] Output(s)s format: fp32:CHW
[04/08/2022-14:45:37] [I] Input build shapes: model
[04/08/2022-14:45:37] [I] Input calibration shapes: model
[04/08/2022-14:45:37] [I] === System Options ===
[04/08/2022-14:45:37] [I] Device: 0
[04/08/2022-14:45:37] [I] DLACore:
[04/08/2022-14:45:37] [I] Plugins:
[04/08/2022-14:45:37] [I] === Inference Options ===
[04/08/2022-14:45:37] [I] Batch: Explicit
[04/08/2022-14:45:37] [I] Input inference shapes: model
[04/08/2022-14:45:37] [I] Iterations: 10
[04/08/2022-14:45:37] [I] Duration: 3s (+ 200ms warm up)
[04/08/2022-14:45:37] [I] Sleep time: 0ms
[04/08/2022-14:45:37] [I] Idle time: 0ms
[04/08/2022-14:45:37] [I] Streams: 1
[04/08/2022-14:45:37] [I] ExposeDMA: Disabled
[04/08/2022-14:45:37] [I] Data transfers: Enabled
[04/08/2022-14:45:37] [I] Spin-wait: Disabled
[04/08/2022-14:45:37] [I] Multithreading: Disabled
[04/08/2022-14:45:37] [I] CUDA Graph: Disabled
[04/08/2022-14:45:37] [I] Separate profiling: Disabled
[04/08/2022-14:45:37] [I] Time Deserialize: Disabled
[04/08/2022-14:45:37] [I] Time Refit: Disabled
[04/08/2022-14:45:37] [I] Skip inference: Disabled
[04/08/2022-14:45:37] [I] Inputs:
[04/08/2022-14:45:37] [I] === Reporting Options ===
[04/08/2022-14:45:37] [I] Verbose: Enabled
[04/08/2022-14:45:37] [I] Averages: 10 inferences
[04/08/2022-14:45:37] [I] Percentile: 99
[04/08/2022-14:45:37] [I] Dump refittable layers:Disabled
[04/08/2022-14:45:37] [I] Dump output: Disabled
[04/08/2022-14:45:37] [I] Profile: Disabled
[04/08/2022-14:45:37] [I] Export timing to JSON file:
[04/08/2022-14:45:37] [I] Export output to JSON file:
[04/08/2022-14:45:37] [I] Export profile to JSON file:
[04/08/2022-14:45:37] [I]
[04/08/2022-14:45:37] [I] === Device Information ===
[04/08/2022-14:45:37] [I] Selected Device: NVIDIA A40
[04/08/2022-14:45:37] [I] Compute Capability: 8.6
[04/08/2022-14:45:37] [I] SMs: 84
[04/08/2022-14:45:37] [I] Compute Clock Rate: 1.74 GHz
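Note: the warning above says --workspace has been deprecated in favor of --memPoolSize, and the build options report "Input build shapes: model" even though both network inputs are dynamic ((-1, -1), see the "Adding network input" entries further down). A possible updated invocation is sketched here; the --memPoolSize value is assumed to be in MiB, and the min/opt/max shapes are hypothetical placeholders for illustration, not values taken from this model. The shape flags follow trtexec's name:dims spec, comma-separated per input.

./trtexec --onnx=/home/omnisky/yanxf/GlobalPointer-main/tf15_models/output5.onnx \
          --saveEngine=/home/omnisky/yanxf/GlobalPointer-main/tf15_models/output5.trt \
          --verbose \
          --memPoolSize=workspace:6000 \
          --minShapes=Input-Token:1x32,Input-Segment:1x32 \
          --optShapes=Input-Token:1x128,Input-Segment:1x128 \
          --maxShapes=Input-Token:4x256,Input-Segment:4x256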
[04/08/2022-14:45:37] [I] Device Global Memory: 45434 MiB
[04/08/2022-14:45:37] [I] Shared Memory per SM: 100 KiB
[04/08/2022-14:45:37] [I] Memory Bus Width: 384 bits (ECC enabled)
[04/08/2022-14:45:37] [I] Memory Clock Rate: 7.251 GHz
[04/08/2022-14:45:37] [I]
[04/08/2022-14:45:37] [I] TensorRT version: 8.4.0
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::Clip_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::ScatterND version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::EfficientNMS_Explicit_TF_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::EfficientNMS_Implicit_TF_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::Proposal version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::Split version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[04/08/2022-14:45:37] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 2
[04/08/2022-14:45:38] [I] [TRT] [MemUsageChange] Init CUDA: CPU +358, GPU +0, now: CPU 366, GPU 457 (MiB)
[04/08/2022-14:45:38] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 384 MiB, GPU 457 MiB
[04/08/2022-14:45:39] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 759 MiB, GPU 579 MiB
[04/08/2022-14:45:39] [I] Start parsing network model
[04/08/2022-14:45:39] [I] [TRT] ----------------------------------------------------------------
[04/08/2022-14:45:39] [I] [TRT] Input filename: /home/omnisky/yanxf/GlobalPointer-main/tf15_models/output5.onnx
[04/08/2022-14:45:39] [I] [TRT] ONNX IR version: 0.0.7
[04/08/2022-14:45:39] [I] [TRT] Opset version: 13
[04/08/2022-14:45:39] [I] [TRT] Producer name: tf2onnx
[04/08/2022-14:45:39] [I] [TRT] Producer version: 1.9.3
[04/08/2022-14:45:39] [I] [TRT] Domain:
[04/08/2022-14:45:39] [I] [TRT] Model version: 0
[04/08/2022-14:45:39] [I] [TRT] Doc string:
[04/08/2022-14:45:39] [I] [TRT] ----------------------------------------------------------------
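Note: the block above shows the model metadata the parser sees (opset 13, produced by tf2onnx 1.9.3). If NVIDIA's Polygraphy tool happens to be installed alongside TensorRT, the same information, plus the model's inputs and outputs, can be cross-checked without starting a build, for example:

polygraphy inspect model /home/omnisky/yanxf/GlobalPointer-main/tf15_models/output5.onnx

This is only a suggested offline sanity check of the exported ONNX file, not part of the build log.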
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::GridAnchor_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::GridAnchorRect_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::NMS_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::Reorg_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::Region_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::Clip_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::LReLU_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::PriorBox_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::Normalize_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::ScatterND version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::RPROI_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::BatchedNMS_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::BatchedNMSDynamic_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::FlattenConcat_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::CropAndResize version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::DetectionLayer_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::EfficientNMS_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::EfficientNMS_ONNX_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::EfficientNMS_Explicit_TF_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::EfficientNMS_Implicit_TF_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::Proposal version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::ProposalLayer_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::PyramidROIAlign_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::ResizeNearest_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::Split version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::SpecialSlice_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::InstanceNormalization_TRT version 1
[04/08/2022-14:45:39] [V] [TRT] Plugin creator already registered - ::InstanceNormalization_TRT version 2
[04/08/2022-14:45:39] [V] [TRT] Adding network input: Input-Segment with dtype: float32, dimensions: (-1, -1)
[04/08/2022-14:45:39] [V] [TRT] Registering tensor: Input-Segment for ONNX tensor: Input-Segment
[04/08/2022-14:45:39] [V] [TRT] Adding network input: Input-Token with dtype: float32, dimensions: (-1, -1)
[04/08/2022-14:45:39] [V] [TRT] Registering tensor: Input-Token for ONNX tensor: Input-Token
[04/08/2022-14:45:39] [V] [TRT] Importing initializer: zero_const__739
[04/08/2022-14:45:39] [V] [TRT] Importing initializer: slice_axes__3477
[04/08/2022-14:45:39]
[V] [TRT] Importing initializer: slice_axes__3470 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: slice_axes__3341 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: one__3489 [04/08/2022-14:45:39] [W] [TRT] onnx2trt_utils.cpp:365: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32. [04/08/2022-14:45:39] [V] [TRT] Importing initializer: min_const__738 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: max_const__737 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: end_masked__3476 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: end_masked__3469 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: end_masked__3378 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: end_masked__3353 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_starts__3484 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_starts__12 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_fold_opt__4058 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_fold_opt__3935 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_fold_opt__3918 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_ends__3485 [04/08/2022-14:45:39] [V] [TRT] Weight at index 0: 9223372036854775807 is out of range. 
Clamping to: 2147483647 [04/08/2022-14:45:39] [W] [TRT] onnx2trt_utils.cpp:391: One or more weights outside the range of INT32 was clamped [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_ends__13 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_ends__10 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: const_axes__11 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: begin_masked__3475 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: begin_masked__3468 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: begin_masked__3396 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9/stack_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8/stack_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10/stack_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2/stack_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange/start:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange/delta:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8/y:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_shape__4161 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1_shape__4164 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart/num_lower:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Const:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Embedding-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Embedding-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape_shape__4182 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: Func/StatefulPartitionedCall/input/_3:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: Func/StatefulPartitionedCall/input/_2:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: ConstantFolding/StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1_recip:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:39] [V] [TRT] Importing initializer: Const__3624 [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual [Equal] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Input-Token [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/NotEqual [Equal] inputs: [Input-Token -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/NotEqual [Equal] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/NotEqual:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6 [Not] inputs: [StatefulPartitionedCall/model_2/Embedding-Token/NotEqual:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6 for ONNX node: 
StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6 [Not] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__11 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__10 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__11 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6:0 -> (-1, -1)[BOOL]], [const_axes__11 -> (2)[INT32]], [const_ends__10 -> (2)[INT32]], [const_axes__11 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/strided_slice_1:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__12 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__13 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__11 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Embedding-Token/NotEqual__6:0 -> (-1, -1)[BOOL]], [const_starts__12 -> (2)[INT32]], [const_ends__13 -> (2)[INT32]], [const_axes__11 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/strided_slice:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Embedding-Token/strided_slice:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering 
layer: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape__15 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape__15 [Cast] inputs: [StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape__15 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape__15 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape__15:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape__15:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape__15 [Cast] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape__15:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/ones_like__16 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape__15:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/ones_like__16 [Cast] inputs: [StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Shape__15:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token/ones_like__16 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token/ones_like__16 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/ones_like__16:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/ones_like__16:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/ones_like__16 [Cast] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/ones_like__16:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/ones_like [Expand] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Const:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/ones_like__16:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/ones_like [Expand] inputs: [StatefulPartitionedCall/model_2/Embedding-Token/ones_like/Const:0 -> ()[BOOL]], [StatefulPartitionedCall/model_2/Embedding-Token/ones_like__16:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token/ones_like for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token/ones_like [04/08/2022-14:45:39] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/ones_like:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/ones_like:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/ones_like [Expand] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/ones_like:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/concat [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/ones_like:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Embedding-Token/ones_like:0 -> (-1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Embedding-Token/strided_slice_1:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token/concat for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token/concat [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/concat:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/concat:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/Cast [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Input-Token [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/Cast [Cast] inputs: [Input-Token -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token/Cast for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token/Cast [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/Cast:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/Cast:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token-Segment/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/concat:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token-Segment/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Embedding-Token/concat:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token-Segment/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token-Segment/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token-Segment/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token-Segment/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token-Segment/ExpandDims [Unsqueeze] outputs: 
[StatefulPartitionedCall/model_2/Embedding-Token-Segment/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Segment/Cast [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Input-Segment [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Segment/Cast [Cast] inputs: [Input-Segment -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Segment/Cast for ONNX node: StatefulPartitionedCall/model_2/Embedding-Segment/Cast [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Segment/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Segment/Cast:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Segment/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Embedding-Segment/Cast:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Less__747 [Less] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8/stack_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: zero_const__739 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Less__747 [Less] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8/stack_2:0 -> (2)[INT32]], [zero_const__739 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8/stack_2:0 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8/stack_2:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: zero_const__739 for ONNX node: zero_const__739 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Less__747 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Less__747 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Less__747:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Less__747:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Less__747 [Less] outputs: [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Less__747:0 -> (2)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__20 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token-Segment/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Not__20 [Not] inputs: [StatefulPartitionedCall/model_2/Embedding-Token-Segment/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__20 for ONNX node: Not__20 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__20:0 for ONNX tensor: Not__20:0 [04/08/2022-14:45:39] [V] [TRT] Not__20 [Not] outputs: [Not__20:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Neg__3505 [Neg] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart/num_lower:0 [04/08/2022-14:45:39] [V] [TRT] Neg__3505 [Neg] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart/num_lower:0 -> ()[INT32]], 
[04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart/num_lower:0 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart/num_lower:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Neg__3505 for ONNX node: Neg__3505 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Neg__3505:0 for ONNX tensor: Neg__3505:0 [04/08/2022-14:45:39] [V] [TRT] Neg__3505 [Neg] outputs: [Neg__3505:0 -> ()[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Segment/embedding_lookup [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Func/StatefulPartitionedCall/input/_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Segment/Cast:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Segment/embedding_lookup [Gather] inputs: [Func/StatefulPartitionedCall/input/_3:0 -> (2, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Segment/Cast:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Func/StatefulPartitionedCall/input/_3:0 for ONNX node: Func/StatefulPartitionedCall/input/_3:0 [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Segment/embedding_lookup for ONNX node: StatefulPartitionedCall/model_2/Embedding-Segment/embedding_lookup [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Segment/embedding_lookup:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Segment/embedding_lookup:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Segment/embedding_lookup [Gather] outputs: [StatefulPartitionedCall/model_2/Embedding-Segment/embedding_lookup:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token/embedding_lookup [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Func/StatefulPartitionedCall/input/_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/Cast:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/embedding_lookup [Gather] inputs: [Func/StatefulPartitionedCall/input/_2:0 -> (21128, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Token/Cast:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Func/StatefulPartitionedCall/input/_2:0 for ONNX node: Func/StatefulPartitionedCall/input/_2:0 [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token/embedding_lookup for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token/embedding_lookup [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token/embedding_lookup:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token/embedding_lookup:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token/embedding_lookup [Gather] outputs: [StatefulPartitionedCall/model_2/Embedding-Token/embedding_lookup:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token-Segment/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token/embedding_lookup:0 [04/08/2022-14:45:39] [V] [TRT] 
Searching for input: StatefulPartitionedCall/model_2/Embedding-Segment/embedding_lookup:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token-Segment/add [Add] inputs: [StatefulPartitionedCall/model_2/Embedding-Token/embedding_lookup:0 -> (-1, -1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Segment/embedding_lookup:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token-Segment/add for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token-Segment/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token-Segment/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token-Segment/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token-Segment/add [Add] outputs: [StatefulPartitionedCall/model_2/Embedding-Token-Segment/add:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Position/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token-Segment/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Embedding-Token-Segment/add:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Position/Shape for ONNX node: StatefulPartitionedCall/model_2/Embedding-Position/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Position/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Position/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Embedding-Position/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Position/Shape__723 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/Shape__723 [Cast] inputs: [StatefulPartitionedCall/model_2/Embedding-Position/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Position/Shape__723 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Position/Shape__723 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Position/Shape__723:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Position/Shape__723:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/Shape__723 [Cast] outputs: [StatefulPartitionedCall/model_2/Embedding-Position/Shape__723:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/Shape__723:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_1 [Slice] inputs: 
[StatefulPartitionedCall/model_2/Embedding-Position/Shape__723:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3/stack_1_Concat__733 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: zero_const__739 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3/stack_1_Concat__733 [Concat] inputs: [zero_const__739 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3/stack_1_Concat__733 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3/stack_1_Concat__733 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3/stack_1_Concat__733:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3/stack_1_Concat__733:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3/stack_1_Concat__733 [Concat] outputs: [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3/stack_1_Concat__733:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Cast__742 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3/stack_1_Concat__733:0 [04/08/2022-14:45:39] [V] [TRT] Cast__742 [Cast] inputs: [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3/stack_1_Concat__733:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Cast__742 for ONNX node: Cast__742 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Cast__742:0 for ONNX tensor: Cast__742:0 [04/08/2022-14:45:39] [V] [TRT] Cast__742 [Cast] outputs: [Cast__742:0 -> (2)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Max__744 [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Cast__742:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3918 [04/08/2022-14:45:39] [V] [TRT] Max__744 [Max] inputs: [Cast__742:0 -> (2)[FLOAT]], [const_fold_opt__3918 -> (2)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: const_fold_opt__3918 for ONNX node: const_fold_opt__3918 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Max__744 for ONNX node: Max__744 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Max__744:0 for ONNX tensor: Max__744:0 [04/08/2022-14:45:39] [V] [TRT] Max__744 [Max] outputs: [Max__744:0 -> (2)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Max__744__745 
[Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Max__744:0 [04/08/2022-14:45:39] [V] [TRT] Max__744__745 [Cast] inputs: [Max__744:0 -> (2)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Max__744__745 for ONNX node: Max__744__745 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Max__744__745:0 for ONNX tensor: Max__744__745:0 [04/08/2022-14:45:39] [V] [TRT] Max__744__745 [Cast] outputs: [Max__744__745:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Equal__748 [Equal] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Max__744__745:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: max_const__737 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Equal__748 [Equal] inputs: [Max__744__745:0 -> (2)[INT32]], [max_const__737 -> ()[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: max_const__737 for ONNX node: max_const__737 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Equal__748 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Equal__748 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Equal__748:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Equal__748:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Equal__748 [Equal] outputs: [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Equal__748:0 -> (2)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_And__749 [And] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Equal__748:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Less__747:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_And__749 [And] inputs: [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Equal__748:0 -> (2)[BOOL]], [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Less__747:0 -> (2)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_And__749 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_And__749 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_And__749:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_And__749:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_And__749 [And] outputs: [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_And__749:0 -> (2)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Where__750 [Where] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_And__749:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: min_const__738 [04/08/2022-14:45:39] [V] [TRT] Searching for input: Max__744__745:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Where__750 [Where] inputs: [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_And__749:0 -> (2)[BOOL]], [min_const__738 -> ()[INT32]], [Max__744__745:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: min_const__738 for ONNX node: min_const__738 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Where__750 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Where__750 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Where__750:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Where__750:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Where__750 [Where] outputs: [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Where__750:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3935 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3396 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Where__750:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8/stack_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3 [Slice] inputs: [const_fold_opt__3935 -> (1, 512, 128)[FLOAT]], [begin_masked__3396 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3_Where__750:0 -> (2)[INT32]], [begin_masked__3342 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8/stack_2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: const_fold_opt__3935 for ONNX node: const_fold_opt__3935 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3:0 -> (1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Position/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token-Segment/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/add [Add] inputs: [StatefulPartitionedCall/model_2/Embedding-Token-Segment/add:0 -> (-1, -1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Position/strided_slice_3:0 -> (1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Position/add for ONNX node: 
StatefulPartitionedCall/model_2/Embedding-Position/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Position/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Position/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Position/add [Add] outputs: [StatefulPartitionedCall/model_2/Embedding-Position/add:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Embedding-Position/add:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/Mean for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Norm/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Embedding-Norm/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Position/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Embedding-Position/add:0 -> (-1, -1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Norm/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/sub for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Norm/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Embedding-Norm/sub:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/sub:0 -> (-1, -1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Norm/sub:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/Square for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Norm/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/Square [Mul] outputs: 
[StatefulPartitionedCall/model_2/Embedding-Norm/Square:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/Square:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Norm/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Embedding-Norm/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/add [Add] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/add for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Norm/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/add [Add] outputs: [StatefulPartitionedCall/model_2/Embedding-Norm/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Norm/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Embedding-Norm/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/sub:0 -> (-1, -1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Norm/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/truediv for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Norm/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/truediv:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Embedding-Norm/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Embedding-Norm/truediv:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/truediv:0 -> (-1, -1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Norm/mul/ReadVariableOp:0 -> (128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/mul/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/mul for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Norm/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Embedding-Norm/mul:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Norm/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/mul:0 -> (-1, -1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Norm/add_1/ReadVariableOp:0 -> (128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/add_1/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Norm/add_1 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Norm/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Norm/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Norm/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Embedding-Norm/add_1:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/add_1:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape__752 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape__752 [Cast] inputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape__752 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape__752 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape__752:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape__752:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape__752 [Cast] outputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape__752:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape__752:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Shape__752:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: begin_masked__3342 for ONNX node: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] 
[V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot__759 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot__759 [Cast] inputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot__759 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot__759 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot__759:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot__759:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot__759 [Cast] outputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot__759:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape_shape__4182 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Embedding-Norm/add_1:0 -> (-1, -1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape_shape__4182 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape:0 -> (-1, 128)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape:0 -> (-1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape_1:0 -> (128, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape_1:0 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot__759:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot__759:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], 
[const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1/shape_Concat__820 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1/shape_Concat__820 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: const_fold_opt__3765 for ONNX node: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Registering layer: const_fold_opt__3668 for ONNX node: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Registering layer: const_fold_opt__3897 for ONNX node: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1/shape_Concat__820 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1/shape_Concat__820 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1/shape_Concat__820:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1/shape_Concat__820:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1/shape_Concat__820 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1/shape_Concat__820:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1/shape_Concat__820:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1/shape_Concat__820:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_1__806:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/concat_1 [Concat] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/BiasAdd [Add] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose:0 
for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1__800 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1__800 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1__800 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1__800 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1__800:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1__800:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1__800 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1__800:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1__800:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_3 [Slice] 
inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape_1__800:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Transpose__3509 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3509 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Transpose__3509 for ONNX node: Transpose__3509 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Transpose__3509:0 for ONNX tensor: Transpose__3509:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3509 [Transpose] outputs: [Transpose__3509:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Shape__3581 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3581 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Shape__3581 for ONNX node: Shape__3581 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Shape__3581:0 for ONNX tensor: Shape__3581:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3581 [Shape] 
outputs: [Shape__3581:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Gather__3585 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Shape__3581:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:39] [V] [TRT] Gather__3585 [Gather] inputs: [Shape__3581:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Const__3624 for ONNX node: Const__3624 [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Gather__3585 for ONNX node: Gather__3585 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] Gather__3585 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape_1__822:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1/shape_Concat__841 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1/shape_Concat__841 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1/shape_Concat__841 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1/shape_Concat__841 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1/shape_Concat__841:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1/shape_Concat__841:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1/shape_Concat__841 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1/shape_Concat__841:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1__842 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1/shape_Concat__841:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1__842 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1/shape_Concat__841:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1__842 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1__842 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1__842:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1__842:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1__842 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1__842:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Transpose__3509:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1__842:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1 [Reshape] inputs: [Transpose__3509:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1__842:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot [Reshape] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot__767:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1__821:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859 [Cast] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Shape__859:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice 
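(The long Shape/Gather/Slice/Concat/Cast chain above is essentially tf2onnx computing the dynamic reshape targets for the batched einsum it lowered; the attention-score computation being registered reduces to a reshape/transpose/matmul chain over the two (-1, -1, 768) dense outputs. Below is a minimal numpy sketch of the equivalent math, not the parser's code: the batch and sequence sizes are illustrative, and only heads=12 and head_dim=64 are fixed by the shapes printed in the log.)

import numpy as np

# Illustrative sizes; the log only fixes heads=12, head_dim=64 (hidden = 768).
batch, seq_len, heads, head_dim = 2, 8, 12, 64
hidden = heads * head_dim

# Stand-ins for the two (-1, -1, 768) dense/BiasAdd outputs above.
a = np.random.randn(batch, seq_len, hidden).astype(np.float32)   # dense/BiasAdd:0
b = np.random.randn(batch, seq_len, hidden).astype(np.float32)   # dense_1/BiasAdd:0

# Reshape (-1, -1, 768) -> (-1, -1, 12, 64), as in Reshape / Reshape_1.
a = a.reshape(batch, seq_len, heads, head_dim)
b = b.reshape(batch, seq_len, heads, head_dim)

# einsum/transpose: (-1, -1, 12, 64) -> (-1, 12, -1, 64)
a = a.transpose(0, 2, 1, 3)
# Transpose__3509: (-1, -1, 12, 64) -> (-1, 12, 64, -1)
b = b.transpose(0, 2, 3, 1)

# einsum/MatMul: (-1, 12, -1, 64) @ (-1, 12, 64, -1) -> (-1, 12, -1, -1)
scores = a @ b
print(scores.shape)   # (2, 12, 8, 8) with the illustrative sizes above

(The surrounding Reshape nodes in the log only normalize the dynamic dimensions before and after this batched product.)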
[04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2/shape_Concat__878 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2/shape_Concat__878 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2/shape_Concat__878 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2/shape_Concat__878 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2/shape_Concat__878:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2/shape_Concat__878:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2/shape_Concat__878 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2/shape_Concat__878:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2__889 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2/shape_Concat__878:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2__889 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2/shape_Concat__878:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2__889 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2__889 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2__889:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2__889:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2__889 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2__889:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2__889:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2__889:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/truediv [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 for ONNX node: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Cast__23 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__20:0 [04/08/2022-14:45:39] [V] [TRT] Cast__23 [Cast] inputs: [Not__20:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Cast__23 for ONNX node: Cast__23 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Cast__23:0 for ONNX tensor: Cast__23:0 [04/08/2022-14:45:39] [V] [TRT] Cast__23 [Cast] outputs: [Cast__23:0 -> (1, -1, 
-1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Embedding-Token-Segment/All_ReduceSum__29 [ReduceSum] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Cast__23:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token-Segment/All_ReduceSum__29 [ReduceSum] inputs: [Cast__23:0 -> (1, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Embedding-Token-Segment/All_ReduceSum__29 for ONNX node: StatefulPartitionedCall/model_2/Embedding-Token-Segment/All_ReduceSum__29 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Embedding-Token-Segment/All_ReduceSum__29:0 for ONNX tensor: StatefulPartitionedCall/model_2/Embedding-Token-Segment/All_ReduceSum__29:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Embedding-Token-Segment/All_ReduceSum__29 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Embedding-Token-Segment/All_ReduceSum__29:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Greater__33 [Greater] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Token-Segment/All_ReduceSum__29:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Greater__33 [Greater] inputs: [StatefulPartitionedCall/model_2/Embedding-Token-Segment/All_ReduceSum__29:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Greater__33 for ONNX node: Greater__33 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Greater__33:0 for ONNX tensor: Greater__33:0 [04/08/2022-14:45:39] [V] [TRT] Greater__33 [Greater] outputs: [Greater__33:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__36 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Greater__33:0 [04/08/2022-14:45:39] [V] [TRT] Not__36 [Not] inputs: [Greater__33:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__36 for ONNX node: Not__36 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__36:0 for ONNX tensor: Not__36:0 [04/08/2022-14:45:39] [V] [TRT] Not__36 [Not] outputs: [Not__36:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Cast [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__36:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Cast [Cast] inputs: [Not__36:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Cast [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Cast:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] 
Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Cast:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul_1 [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul:0 
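(The nodes just registered implement the usual scaled, masked attention pattern: truediv is exported as a Mul by a constant-folded reciprocal (truediv_recip), the Cast/ReduceSum/Greater/Not chain derives a per-token padding mask that ExpandDims/ExpandDims_1 broadcast to (-1, 1, 1, -1), and the mask is applied both multiplicatively (mul) and as an additive bias on masked positions (sub, mul_1, add) before the Softmax. A short numpy sketch of that pattern follows; the scalar constants (truediv_recip, sub/x, Cast_1/x) are not printed in the log, so 1/sqrt(64), 1.0 and a large negative value are assumed here.)

import numpy as np

batch, heads, seq_len = 2, 12, 8                      # illustrative sizes
scores = np.random.randn(batch, heads, seq_len, seq_len).astype(np.float32)

# truediv is a Mul by a folded reciprocal; 1/sqrt(head_dim) is assumed.
scores = scores * (1.0 / np.sqrt(64.0))

# Per-token padding mask, (-1, -1) after Cast: 1.0 = keep, 0.0 = padded.
mask = np.array([[1, 1, 1, 1, 1, 0, 0, 0]] * batch, dtype=np.float32)
mask = mask[:, None, None, :]                         # ExpandDims(_1) -> (-1, 1, 1, -1)

neg_bias = -1e12                                      # assumed value of Cast_1/x
# mul: scores * mask; sub: 1 - mask; mul_1: neg_bias * (1 - mask); add: sum of both.
masked = scores * mask + neg_bias * (1.0 - mask)

# Softmax over the key axis.
e = np.exp(masked - masked.max(axis=-1, keepdims=True))
attn = e / e.sum(axis=-1, keepdims=True)
print(attn.shape)                                     # (2, 12, 8, 8); padded keys get ~0 weight

(The einsum_1 nodes registered next apply the same reshape/matmul lowering to combine these attention weights with the value projection: (-1, 12, -1, -1) @ (-1, 12, -1, 64) -> (-1, 12, -1, 64).)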
[04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax [Softmax] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1/shape_Concat__904 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1/shape_Concat__904 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1/shape_Concat__904 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1/shape_Concat__904 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1/shape_Concat__904:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1/shape_Concat__904:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1/shape_Concat__904 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1/shape_Concat__904:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1__905 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1/shape_Concat__904:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1__905 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1/shape_Concat__904:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1__905 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1__905 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1__905:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1__905:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1__905 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1__905:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1__905:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1__905:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Shape__890:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] 
[V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2/shape_Concat__924 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2/shape_Concat__924 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2/shape_Concat__924 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2/shape_Concat__924 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2/shape_Concat__924:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2/shape_Concat__924:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2/shape_Concat__924 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2/shape_Concat__924:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2__935 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2/shape_Concat__924:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2__935 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2/shape_Concat__924:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2__935 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2__935 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2__935:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2__935:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2__935 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2__935:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2__935:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2__935:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3 [04/08/2022-14:45:39] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3__936 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3__936 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3__936 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3__936 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3__936:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3__936:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3__936 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3__936:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3__936:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Shape_3__936:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3/shape_Concat__948 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for 
input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3/shape_Concat__948 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3/shape_Concat__948 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3/shape_Concat__948 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3/shape_Concat__948:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3/shape_Concat__948:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3/shape_Concat__948 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3/shape_Concat__948:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3__949 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3/shape_Concat__948:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3__949 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3/shape_Concat__948:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3__949 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3__949 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3__949:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3__949:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3__949 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3__949:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3__949:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3__949:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3 [04/08/2022-14:45:39] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape__950 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape__950 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape__950 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape__950 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape__950:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape__950:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape__950 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape__950:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape__950:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Shape__950:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot__957 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot__957 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot__957 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot__957 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot__957:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot__957:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot__957 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot__957:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/MatMul:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot__957:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot__957:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/BiasAdd [Add] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add [Add] inputs: [StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1:0 -> (-1, 
-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape__958 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape__958 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape__958 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape__958 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape__958:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape__958:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape__958 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape__958:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape__958:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Shape__958:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot__965 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot__965 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot__965 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot__965 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot__965:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot__965:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot__965 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot__965:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], 
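To help connect these verbose entries back to the original Keras graph: the Transformer-FeedForward/dense_4/Tensordot cluster being traced around this point (Shape -> Cast -> Gather -> Concat to rebuild the dynamic output shape, then Reshape -> MatMul -> Reshape, followed by BiasAdd and Relu) is how tf2onnx typically lowers a Dense layer applied to a 3-D (batch, seq, 768) activation. The NumPy sketch below mirrors that pattern under stated assumptions: the 768 and 3072 widths and the tensor names in the comments come from the log, while the concrete batch/sequence sizes and the [0, 1] gather indices (begin_masked__3342) are illustrative guesses, since the log leaves those dimensions dynamic (-1) and does not print the constant's value.

import numpy as np

# Illustrative sizes only: batch and sequence length are dynamic (-1) in the log,
# so the values 2 and 8 here are assumptions; 768 and 3072 are taken from the log.
batch, seq, d_model, d_ff = 2, 8, 768, 3072
x = np.random.randn(batch, seq, d_model).astype(np.float32)  # ...MultiHeadSelfAttention-Norm/add_1:0
w = np.random.randn(d_model, d_ff).astype(np.float32)        # dense_4 kernel (Tensordot/Reshape_1:0, 768x3072)
b = np.random.randn(d_ff).astype(np.float32)                 # dense_4 bias (BiasAdd/ReadVariableOp:0)

# Shape -> Gather (indices assumed to be [0, 1]) -> Concat([..., d_ff]) rebuilds the output shape at runtime.
out_shape = np.concatenate([np.array(x.shape)[[0, 1]], [d_ff]])

# Tensordot/Reshape, Tensordot/MatMul, Tensordot (reshape back), then BiasAdd and Relu.
y = (x.reshape(-1, d_model) @ w).reshape(out_shape)  # (batch*seq, 768) @ (768, 3072) -> (batch, seq, 3072)
y = np.maximum(y + b, 0.0)                           # dense_4/BiasAdd + dense_4/Relu

assert y.shape == (batch, seq, d_ff)

The same Shape/Gather/Concat plus Reshape/MatMul/Reshape scaffolding repeats for every dense_*/Tensordot block in this trace (dense_3, dense_4, dense_5, and so on), which is why the parser registers so many small shape-manipulation layers for each Transformer layer.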
[04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot__965:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot__965:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu [Relu] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] 
Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape__966 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape__966 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape__966 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape__966 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape__966:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape__966:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape__966 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape__966:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape__966:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Shape__966:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot__973 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot__973 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot__973 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot__973 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot__973:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot__973:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot__973 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot__973:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot__973:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot__973:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] 
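The Shape/Cast/GatherV2/concat_1 chain plus the Reshape -> MatMul -> Reshape chain registered above is how tf2onnx lowers a tf.tensordot (a Dense layer applied to a 3-D tensor): the leading batch and sequence dimensions are flattened, one 2-D MatMul is run against the (3072, 768) kernel, and the result is reshaped back using the shape gathered at runtime. A minimal NumPy sketch of that pattern, with hypothetical batch/sequence sizes:

import numpy as np

def tensordot_as_reshape_matmul(x, w):
    # Shape -> GatherV2 (first two dims) -> concat_1 with the constant last dim
    out_shape = (*x.shape[:2], w.shape[1])
    x2d = x.reshape(-1, x.shape[-1])   # .../Tensordot/Reshape: (-1, 3072)
    y2d = x2d @ w                      # .../Tensordot/MatMul:  (-1, 768)
    return y2d.reshape(out_shape)      # .../Tensordot (Reshape back to (-1, -1, 768))

x = np.random.randn(2, 5, 3072).astype(np.float32)   # hypothetical (batch, seq, 3072) activation
w = np.random.randn(3072, 768).astype(np.float32)    # stands in for .../dense_5/Tensordot/Reshape_1:0
print(tensordot_as_reshape_matmul(x, w).shape)       # (2, 5, 768)
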
[TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] 
Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] 
[TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], 
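The Mean/sub/Square/Mean_1/add/clip_by_value/Sqrt/truediv/mul/add_1 chain registered just above is a LayerNorm spelled out op by op (tf2onnx exports the two per-token means over the 768 axis as GlobalAveragePool, which is why they show up with that op type). A minimal NumPy sketch of that chain applied after the Transformer-FeedForward-Add residual; the epsilon and clip bounds are assumptions, since the log only shows scalar constants feeding the add and clip_by_value nodes:

import numpy as np

def decomposed_layer_norm(x, gamma, beta, eps=1e-12, clip_min=0.0, clip_max=np.inf):
    mean = x.mean(axis=-1, keepdims=True)                  # .../Norm/Mean (GlobalAveragePool)
    centered = x - mean                                    # .../Norm/sub
    var = (centered * centered).mean(-1, keepdims=True)    # .../Norm/Square + .../Norm/Mean_1
    var = np.minimum(var + eps, clip_max)                  # .../Norm/add + clip_by_value/Minimum
    var = np.maximum(var, clip_min)                        # .../Norm/clip_by_value (Max)
    normed = centered / np.sqrt(var)                       # .../Norm/Sqrt + .../Norm/truediv
    return normed * gamma + beta                           # .../Norm/mul + .../Norm/add_1

attn_out = np.random.randn(2, 5, 768).astype(np.float32)  # hypothetical residual branch (MultiHeadSelfAttention-Norm output)
ffn_out = np.random.randn(2, 5, 768).astype(np.float32)   # hypothetical dense_5/BiasAdd output
gamma = np.ones(768, dtype=np.float32)                     # .../Norm/mul/ReadVariableOp:0
beta = np.zeros(768, dtype=np.float32)                     # .../Norm/add_1/ReadVariableOp:0
hidden = decomposed_layer_norm(attn_out + ffn_out, gamma, beta)   # FeedForward-Add/add, then the Norm chain
print(hidden.shape)   # (2, 5, 768)
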
[04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_2 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_2 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_2 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_2:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing 
node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_2:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2/shape_Concat__1012 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2/shape_Concat__1012 [Concat] inputs: 
[const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2/shape_Concat__1012 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2/shape_Concat__1012 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2/shape_Concat__1012:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2/shape_Concat__1012:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2/shape_Concat__1012 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2/shape_Concat__1012:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2/shape_Concat__1012:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2/shape_Concat__1012:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/Shape__982:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/GatherV2:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1__1014 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1__1014 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1__1014 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1__1014 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1__1014:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1__1014:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1__1014 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1__1014:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1__1014:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape_1__1014:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Transpose__3515 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3515 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Transpose__3515 for ONNX node: Transpose__3515 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Transpose__3515:0 for ONNX tensor: Transpose__3515:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3515 [Transpose] outputs: [Transpose__3515:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Shape__3586 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3586 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Shape__3586 for ONNX node: Shape__3586 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Shape__3586:0 for ONNX tensor: Shape__3586:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3586 [Shape] outputs: [Shape__3586:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Gather__3590 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Shape__3586:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:39] [V] [TRT] Gather__3590 [Gather] inputs: [Shape__3586:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Gather__3590 for ONNX node: Gather__3590 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] Gather__3590 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape_1__1036:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_2:0 
-> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1/shape_Concat__1055 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1/shape_Concat__1055 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1/shape_Concat__1055 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1/shape_Concat__1055 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1/shape_Concat__1055:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1/shape_Concat__1055:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1/shape_Concat__1055 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1/shape_Concat__1055:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1__1056 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1/shape_Concat__1055:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1__1056 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1/shape_Concat__1055:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1__1056 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1__1056 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1__1056:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1__1056:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1__1056 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1__1056:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Transpose__3515:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1__1056:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1 [Reshape] inputs: [Transpose__3515:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1__1056:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot__989:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2__1013:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Shape__1073:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2/shape_Concat__1092 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2/shape_Concat__1092 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2/shape_Concat__1092 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2/shape_Concat__1092 [04/08/2022-14:45:39] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2/shape_Concat__1092:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2/shape_Concat__1092:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2/shape_Concat__1092 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2/shape_Concat__1092:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2__1103 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2/shape_Concat__1092:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2__1103 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2/shape_Concat__1092:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2__1103 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2__1103 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2__1103:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2__1103:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2__1103 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2__1103:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2__1103:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2__1103:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/truediv [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/truediv [Mul] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__36:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1 [Unsqueeze] inputs: [Not__36:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/concat [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/concat [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/concat:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/concat [Concat] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__51 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/concat:0 [04/08/2022-14:45:39] [V] [TRT] Not__51 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__51 for ONNX node: Not__51 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__51:0 for ONNX tensor: Not__51:0 [04/08/2022-14:45:39] [V] [TRT] Not__51 [Not] outputs: [Not__51:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Cast__54 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__51:0 [04/08/2022-14:45:39] [V] [TRT] Cast__54 [Cast] inputs: [Not__51:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Cast__54 for ONNX node: Cast__54 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Cast__54:0 for ONNX tensor: Cast__54:0 [04/08/2022-14:45:39] [V] [TRT] Cast__54 [Cast] outputs: [Cast__54:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/All_ReduceSum__60 [ReduceSum] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Cast__54:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/All_ReduceSum__60 [ReduceSum] inputs: [Cast__54:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/All_ReduceSum__60 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/All_ReduceSum__60 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/All_ReduceSum__60:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/All_ReduceSum__60:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/All_ReduceSum__60 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/All_ReduceSum__60:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Greater__64 [Greater] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/All_ReduceSum__60:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Greater__64 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add/All_ReduceSum__60:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Greater__64 for ONNX node: Greater__64 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Greater__64:0 for ONNX tensor: Greater__64:0 [04/08/2022-14:45:39] [V] [TRT] Greater__64 [Greater] outputs: [Greater__64:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__67 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Greater__64:0 [04/08/2022-14:45:39] [V] 
[TRT] Not__67 [Not] inputs: [Greater__64:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__67 for ONNX node: Not__67 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__67:0 for ONNX tensor: Not__67:0 [04/08/2022-14:45:39] [V] [TRT] Not__67 [Not] outputs: [Not__67:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__67:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims [Unsqueeze] inputs: [Not__67:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/concat [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/concat [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/concat:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__76 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/concat:0 [04/08/2022-14:45:39] [V] [TRT] Not__76 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__76 for ONNX node: Not__76 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__76:0 for ONNX tensor: Not__76:0 [04/08/2022-14:45:39] [V] [TRT] Not__76 [Not] outputs: [Not__76:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Cast__79 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__76:0 [04/08/2022-14:45:39] [V] [TRT] Cast__79 
[Cast] inputs: [Not__76:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Cast__79 for ONNX node: Cast__79 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Cast__79:0 for ONNX tensor: Cast__79:0 [04/08/2022-14:45:39] [V] [TRT] Cast__79 [Cast] outputs: [Cast__79:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/All_ReduceSum__85 [ReduceSum] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Cast__79:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/All_ReduceSum__85 [ReduceSum] inputs: [Cast__79:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/All_ReduceSum__85 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/All_ReduceSum__85 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/All_ReduceSum__85:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/All_ReduceSum__85:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/All_ReduceSum__85 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/All_ReduceSum__85:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Greater__89 [Greater] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/All_ReduceSum__85:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Greater__89 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add/All_ReduceSum__85:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Greater__89 for ONNX node: Greater__89 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Greater__89:0 for ONNX tensor: Greater__89:0 [04/08/2022-14:45:39] [V] [TRT] Greater__89 [Greater] outputs: [Greater__89:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__92 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Greater__89:0 [04/08/2022-14:45:39] [V] [TRT] Not__92 [Not] inputs: [Greater__89:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__92 for ONNX node: Not__92 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__92:0 for ONNX tensor: Not__92:0 [04/08/2022-14:45:39] [V] [TRT] Not__92 [Not] outputs: [Not__92:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Cast [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__92:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Cast [Cast] inputs: [Not__92:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Cast [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Cast:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Cast:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul_1 [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax [Softmax] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] 
Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1/shape_Concat__1118 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1/shape_Concat__1118 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1/shape_Concat__1118 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1/shape_Concat__1118 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1/shape_Concat__1118:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1/shape_Concat__1118:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1/shape_Concat__1118 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1/shape_Concat__1118:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1__1119 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1/shape_Concat__1118:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1__1119 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1/shape_Concat__1118:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1__1119 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1__1119 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1__1119:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1__1119:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1__1119 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1__1119:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1__1119:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1__1119:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Shape__1104:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2/shape_Concat__1138 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2/shape_Concat__1138 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2/shape_Concat__1138 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2/shape_Concat__1138 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2/shape_Concat__1138:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2/shape_Concat__1138:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2/shape_Concat__1138 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2/shape_Concat__1138:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2__1149 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2/shape_Concat__1138:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2__1149 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2/shape_Concat__1138:0 -> 
(4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2__1149 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2__1149 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2__1149:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2__1149:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2__1149 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2__1149:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2__1149:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2__1149:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] 
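Taken together, the Reshape / MatMul / Reshape_2 / transpose_1 nodes above are tf2onnx's lowering of the attention layer's second einsum, the one that applies the softmaxed attention probabilities to the value tensor. Judging only from the logged shapes ((-1, 12, -1, -1) x (-1, 12, -1, 64) -> (-1, 12, -1, 64) -> (-1, -1, 12, 64)), the equivalent computation is roughly the following sketch; the example sizes stand in for the dynamic batch and sequence dimensions:

import numpy as np

batch, heads, q_len, k_len, head_size = 2, 12, 128, 128, 64   # 12 and 64 come from the log; the rest are examples

probs  = np.random.rand(batch, heads, q_len, k_len).astype(np.float32)      # Softmax:0
values = np.random.rand(batch, heads, k_len, head_size).astype(np.float32)  # einsum_1/transpose:0

context = np.einsum('bhqk,bhkd->bhqd', probs, values)   # what einsum_1/Reshape + einsum_1/MatMul compute
context = context.transpose(0, 2, 1, 3)                 # einsum_1/transpose_1 -> (batch, seq, 12, 64)

The Reshape_3 parsed a few nodes later then collapses the last two axes back into the 768-wide hidden dimension.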
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3__1150 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3__1150 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3__1150 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3__1150 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3__1150:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3__1150:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3__1150 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3__1150:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3__1150:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Shape_3__1150:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3/shape_Concat__1162 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3/shape_Concat__1162 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3/shape_Concat__1162 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3/shape_Concat__1162 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3/shape_Concat__1162:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3/shape_Concat__1162:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3/shape_Concat__1162 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3/shape_Concat__1162:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3__1163 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3/shape_Concat__1162:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3__1163 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3/shape_Concat__1162:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3__1163 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3__1163 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3__1163:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3__1163:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3__1163 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3__1163:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3__1163:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3__1163:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape__1164 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape__1164 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape__1164 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape__1164 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape__1164:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape__1164:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape__1164 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape__1164:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape__1164:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Shape__1164:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot__1171 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot__1171 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot__1171 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot__1171 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot__1171:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot__1171:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot__1171 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot__1171:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/MatMul [MatMul] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot__1171:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot__1171:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean_1:0 
[04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/truediv:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1:0 [04/08/2022-14:45:39] [V] [TRT] 
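The Mean / sub / Square / Mean_1 / add / clip_by_value / Sqrt / truediv / mul / add_1 run that finishes here is the layer normalization applied to the residual sum registered just before it (Transformer-MultiHeadSelfAttention-Add_1/add), exported op by op rather than as a single LayerNorm node. A rough numpy equivalent, where epsilon, the clip bounds, and the example shapes are placeholders (the real constants live in Transformer-FeedForward-Norm/add/y:0 and the Const tensors referenced above):

import numpy as np

def layer_norm(x, gamma, beta, eps=1e-12, clip_min=0.0, clip_max=np.inf):
    mean = x.mean(axis=-1, keepdims=True)                      # Norm_1/Mean (reduction over the feature axis)
    centered = x - mean                                         # Norm_1/sub
    var = (centered * centered).mean(axis=-1, keepdims=True)    # Norm_1/Square, Norm_1/Mean_1
    var = np.clip(var + eps, clip_min, clip_max)                # Norm_1/add, clip_by_value/Minimum, clip_by_value
    return centered / np.sqrt(var) * gamma + beta               # Norm_1/Sqrt, truediv, mul, add_1

x = np.random.rand(2, 128, 768).astype(np.float32)   # stands in for Add_1/add:0, (-1, -1, 768)
gamma = np.ones(768, dtype=np.float32)                # Norm_1/mul/ReadVariableOp:0
beta  = np.zeros(768, dtype=np.float32)               # add_1/ReadVariableOp:0 (the beta weight referenced above)
out = layer_norm(x, gamma, beta)

This is also why the two Mean ops show up as GlobalAveragePool nodes with a (-1, -1, 1) output: the reduction only runs over the 768-wide feature axis.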
StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape__1172 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape__1172 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape__1172 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape__1172 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape__1172:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape__1172:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape__1172 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape__1172:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape__1172:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Shape__1172:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/GatherV2 [Gather] outputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot__1179 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot__1179 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot__1179 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot__1179 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot__1179:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot__1179:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot__1179 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot__1179:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1:0 -> (-1, -1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot__1179:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot__1179:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu [Relu] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape__1180 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape__1180 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape__1180 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape__1180 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape__1180:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape__1180:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape__1180 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape__1180:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape__1180:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Shape__1180:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 
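The long Shape → Cast → Gather → Concat → Cast → Reshape → MatMul → Reshape chains being parsed here are tf2onnx's lowering of the tf.tensordot inside each Keras Dense layer: the (-1, -1, 768) activation is flattened to (-1, 768), multiplied by the (768, 3072) kernel, and reshaped back to (-1, -1, 3072) using a shape assembled at runtime, followed by BiasAdd and Relu. A minimal numpy sketch of that pattern (shapes read off the log; the variable names and concrete batch/sequence sizes are illustrative, not taken from the model):

    import numpy as np

    # Illustrative dynamic shapes; hidden size 768 and FFN width 3072 come from the log.
    x = np.random.randn(2, 5, 768).astype(np.float32)        # (-1, -1, 768) input
    kernel = np.random.randn(768, 3072).astype(np.float32)   # dense_4 kernel (Reshape_1:0)
    bias = np.zeros(3072, dtype=np.float32)                  # dense_4 bias (ReadVariableOp:0)

    # Shape -> Gather(first two dims) -> Concat([.., .., 3072]): the runtime output shape.
    out_shape = np.concatenate([np.array(x.shape[:2]), [3072]])

    # Tensordot/Reshape -> MatMul -> Tensordot (reshape back), then BiasAdd and Relu.
    y = (x.reshape(-1, 768) @ kernel).reshape(out_shape)
    y = np.maximum(y + bias, 0.0)                             # (-1, -1, 3072)

The second Dense of the feed-forward block (dense_5, parsed next) repeats the same pattern with a (3072, 768) kernel.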
[04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot__1187 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot__1187 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot__1187 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot__1187 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot__1187:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot__1187:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot__1187 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot__1187:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Reshape:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot__1187:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot__1187:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing 
node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape:0 for ONNX 
tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul [MatMul] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234 for ONNX 
node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1/shape_Concat__1248 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1/shape_Concat__1248 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1/shape_Concat__1248 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1/shape_Concat__1248 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1/shape_Concat__1248:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1/shape_Concat__1248:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1/shape_Concat__1248 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1/shape_Concat__1248:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1/shape_Concat__1248:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1/shape_Concat__1248:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_1__1234:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot 
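The Transformer-FeedForward-Norm_1 chain parsed above (Mean → Sub → Square → Mean → Add → Minimum → Maximum → Sqrt → Div → Mul → Add) is a LayerNormalization over the 768-dim axis spelled out in primitive ops; the importer maps the two keepdims means onto GlobalAveragePool. A hedged numpy sketch of what that chain computes; the epsilon and clip bounds below are placeholders, since the actual Const and add/y values live in initializers the verbose log does not print:

    import numpy as np

    def layernorm_from_log_ops(x, gamma, beta, eps=1e-12, clip_min=1e-12, clip_max=3.4e38):
        mean = x.mean(axis=-1, keepdims=True)                     # Mean (GlobalAveragePool)
        centred = x - mean                                        # sub
        var = (centred * centred).mean(axis=-1, keepdims=True)    # Square, Mean_1
        # add/y, then clip_by_value = Minimum followed by Maximum (bounds assumed).
        denom = np.sqrt(np.maximum(np.minimum(var + eps, clip_max), clip_min))
        return (centred / denom) * gamma + beta                   # truediv, mul, add_1

    x = np.random.randn(2, 5, 768).astype(np.float32)
    out = layernorm_from_log_ops(x, np.ones(768, np.float32), np.zeros(768, np.float32))

The mul and add_1 operands (the (768,) ReadVariableOp tensors) are the layer's learned scale and shift.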
[04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1__1228 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1__1228 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1__1228 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1__1228 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1__1228:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1__1228:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1__1228 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1__1228:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1__1228:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape_1__1228:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] 
[TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Transpose__3524 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3524 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Transpose__3524 for ONNX node: Transpose__3524 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Transpose__3524:0 for ONNX tensor: Transpose__3524:0 
[04/08/2022-14:45:39] [V] [TRT] Transpose__3524 [Transpose] outputs: [Transpose__3524:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Shape__3591 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3591 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Shape__3591 for ONNX node: Shape__3591 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Shape__3591:0 for ONNX tensor: Shape__3591:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3591 [Shape] outputs: [Shape__3591:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Gather__3595 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Shape__3591:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:39] [V] [TRT] Gather__3595 [Gather] inputs: [Shape__3591:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Gather__3595 for ONNX node: Gather__3595 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] Gather__3595 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape_1__1250:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1/shape_Concat__1269 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1/shape_Concat__1269 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_2:0 -> 
(1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1/shape_Concat__1269 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1/shape_Concat__1269 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1/shape_Concat__1269:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1/shape_Concat__1269:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1/shape_Concat__1269 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1/shape_Concat__1269:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1__1270 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1/shape_Concat__1269:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1__1270 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1/shape_Concat__1269:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1__1270 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1__1270 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1__1270:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1__1270:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1__1270 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1__1270:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Transpose__3524:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1__1270:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1 [Reshape] inputs: [Transpose__3524:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1__1270:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot__1203:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for 
input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1__1249:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 
[04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Shape__1287:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2/shape_Concat__1306 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2/shape_Concat__1306 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2/shape_Concat__1306 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2/shape_Concat__1306 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2/shape_Concat__1306:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2/shape_Concat__1306:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2/shape_Concat__1306 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2/shape_Concat__1306:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2__1317 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2/shape_Concat__1306:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2__1317 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2/shape_Concat__1306:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2__1317 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2__1317 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2__1317:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2__1317:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2__1317 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2__1317:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2__1317:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2__1317:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/truediv [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/truediv:0 -> (-1, 12, -1, 
-1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__92:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims [Unsqueeze] inputs: [Not__92:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/concat [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/concat [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/concat:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__107 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/concat:0 [04/08/2022-14:45:39] [V] [TRT] Not__107 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__107 for ONNX node: Not__107 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__107:0 for ONNX tensor: Not__107:0 [04/08/2022-14:45:39] [V] [TRT] Not__107 [Not] outputs: [Not__107:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Cast__110 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__107:0 [04/08/2022-14:45:39] [V] [TRT] Cast__110 [Cast] 
inputs: [Not__107:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Cast__110 for ONNX node: Cast__110 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Cast__110:0 for ONNX tensor: Cast__110:0 [04/08/2022-14:45:39] [V] [TRT] Cast__110 [Cast] outputs: [Cast__110:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/All_ReduceSum__116 [ReduceSum] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Cast__110:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/All_ReduceSum__116 [ReduceSum] inputs: [Cast__110:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/All_ReduceSum__116 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/All_ReduceSum__116 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/All_ReduceSum__116:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/All_ReduceSum__116:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/All_ReduceSum__116 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/All_ReduceSum__116:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Greater__120 [Greater] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/All_ReduceSum__116:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Greater__120 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_1/All_ReduceSum__116:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Greater__120 for ONNX node: Greater__120 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Greater__120:0 for ONNX tensor: Greater__120:0 [04/08/2022-14:45:39] [V] [TRT] Greater__120 [Greater] outputs: [Greater__120:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__123 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Greater__120:0 [04/08/2022-14:45:39] [V] [TRT] Not__123 [Not] inputs: [Greater__120:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__123 for ONNX node: Not__123 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__123:0 for ONNX tensor: Not__123:0 [04/08/2022-14:45:39] [V] [TRT] Not__123 [Not] outputs: [Not__123:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__123:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims [Unsqueeze] inputs: [Not__123:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), 
unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/concat [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/concat [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/concat:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__132 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/concat:0 [04/08/2022-14:45:39] [V] [TRT] Not__132 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__132 for ONNX node: Not__132 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__132:0 for ONNX tensor: Not__132:0 [04/08/2022-14:45:39] [V] [TRT] Not__132 [Not] outputs: [Not__132:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Cast__135 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__132:0 [04/08/2022-14:45:39] [V] [TRT] Cast__135 [Cast] inputs: [Not__132:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Cast__135 for ONNX node: Cast__135 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Cast__135:0 for ONNX tensor: Cast__135:0 [04/08/2022-14:45:39] [V] [TRT] Cast__135 [Cast] outputs: [Cast__135:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/All_ReduceSum__141 [ReduceSum] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Cast__135:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/All_ReduceSum__141 
[ReduceSum] inputs: [Cast__135:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/All_ReduceSum__141 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/All_ReduceSum__141 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/All_ReduceSum__141:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/All_ReduceSum__141:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/All_ReduceSum__141 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/All_ReduceSum__141:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Greater__145 [Greater] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/All_ReduceSum__141:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Greater__145 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_1/All_ReduceSum__141:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Greater__145 for ONNX node: Greater__145 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Greater__145:0 for ONNX tensor: Greater__145:0 [04/08/2022-14:45:39] [V] [TRT] Greater__145 [Greater] outputs: [Greater__145:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__148 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Greater__145:0 [04/08/2022-14:45:39] [V] [TRT] Not__148 [Not] inputs: [Greater__145:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__148 for ONNX node: Not__148 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__148:0 for ONNX tensor: Not__148:0 [04/08/2022-14:45:39] [V] [TRT] Not__148 [Not] outputs: [Not__148:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Cast [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__148:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Cast [Cast] inputs: [Not__148:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Cast [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Cast:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Cast:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
const_starts__807 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/sub:0 for ONNX 
tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul_1 [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul_1:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax [Softmax] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1/shape_Concat__1332 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1/shape_Concat__1332 [Concat] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1/shape_Concat__1332 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1/shape_Concat__1332 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1/shape_Concat__1332:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1/shape_Concat__1332:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1/shape_Concat__1332 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1/shape_Concat__1332:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1__1333 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1/shape_Concat__1332:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1__1333 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1/shape_Concat__1332:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1__1333 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1__1333 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1__1333:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1__1333:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1__1333 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1__1333:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1__1333:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1__1333:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Shape__1318:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2/shape_Concat__1352 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2/shape_Concat__1352 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2/shape_Concat__1352 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2/shape_Concat__1352 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2/shape_Concat__1352:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2/shape_Concat__1352:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2/shape_Concat__1352 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2/shape_Concat__1352:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2__1363 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2/shape_Concat__1352:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2__1363 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2/shape_Concat__1352:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2__1363 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2__1363 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2__1363:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2__1363:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2__1363 [Cast] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2__1363:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2__1363:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2__1363:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3__1364 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3__1364 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3__1364 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3__1364 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3__1364:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3__1364:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3__1364 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3__1364:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3__1364:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Shape_3__1364:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3/shape_Concat__1376 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3/shape_Concat__1376 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3/shape_Concat__1376 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3/shape_Concat__1376 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3/shape_Concat__1376:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3/shape_Concat__1376:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3/shape_Concat__1376 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3/shape_Concat__1376:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3__1377 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3/shape_Concat__1376:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3__1377 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3/shape_Concat__1376:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3__1377 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3__1377 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3__1377:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3__1377:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3__1377 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3__1377:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3__1377:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3__1377:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape__1378 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape__1378 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape__1378 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape__1378 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape__1378:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape__1378:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape__1378 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape__1378:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape__1378:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Shape__1378:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot__1385 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot__1385 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot__1385 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot__1385 
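Note: the entries immediately before and after this point are tf2onnx's lowering of the Keras Dense (Tensordot) that projects the merged attention heads back to 768 hidden units: a Shape/GatherV2/Concat chain rebuilds the dynamic output shape, the activations are reshaped to (-1, 768), multiplied against a (768, 768) weight, reshaped back to (-1, -1, 768), and passed through a BiasAdd. The Mean/Sub/Square/Sqrt/Div/Mul/Add entries that follow are layer normalization written out in primitive ops, and the ExpandDims/Sub/Mul/Add chain ahead of the Softmax earlier in this block is the matching additive attention-mask construction. A minimal NumPy sketch of what the Dense and layer-norm subgraphs compute, under the shapes reported by the parser; the function names, the example batch/sequence sizes, and the epsilon value are illustrative, not taken from the model:

import numpy as np

def dense_last_axis(x, w, b):
    # Dense over the last axis, the way tf2onnx lowers a Tensordot:
    # flatten the leading dims, do a 2-D MatMul, restore the dims, add the bias.
    batch, seq, hidden = x.shape                 # (-1, -1, 768) in the log
    y = x.reshape(-1, hidden) @ w                # Reshape to (-1, 768), MatMul with (768, 768)
    y = y.reshape(batch, seq, w.shape[1])        # Reshape back via the Shape/GatherV2/Concat result
    return y + b                                 # BiasAdd with the (768,) bias vector

def layer_norm(x, gamma, beta, eps=1e-12):
    # The Mean / sub / Square / Mean_1 / add / Sqrt / truediv / mul / add_1 chain of
    # Transformer-MultiHeadSelfAttention-Norm_2. eps is an assumption (the real value is the
    # constant Transformer-FeedForward-Norm/add/y:0), and the clip_by_value Min/Max that
    # bounds the variance in the graph is omitted here.
    mean = x.mean(-1, keepdims=True)             # Mean: (-1, -1, 768) -> (-1, -1, 1)
    var = ((x - mean) ** 2).mean(-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# Illustrative shapes only: batch 2, sequence 16, hidden size 768 as reported by the parser.
x = np.random.randn(2, 16, 768).astype(np.float32)
w = np.random.randn(768, 768).astype(np.float32) * 0.02
b = np.zeros(768, dtype=np.float32)
out = layer_norm(x + dense_last_axis(x, w, b),   # residual add (MultiHeadSelfAttention-Add_2), then norm
                 gamma=np.ones(768, dtype=np.float32),
                 beta=np.zeros(768, dtype=np.float32))
print(out.shape)                                 # (2, 16, 768), i.e. (-1, -1, 768) in the log

tf2onnx emits the Shape/GatherV2/Concat shape-rebuilding chain so that the final Reshape works for dynamic batch and sequence lengths, which is why so many -1 dimensions appear in these parser entries.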
[04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot__1385:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot__1385:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot__1385 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot__1385:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot__1385:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot__1385:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_1/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add [Add] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape 
[Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape__1386 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape__1386 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape__1386 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape__1386 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape__1386:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape__1386:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape__1386 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape__1386:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape__1386:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Shape__1386:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot__1393 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot__1393 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot__1393 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot__1393 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot__1393:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot__1393:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot__1393 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot__1393:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Reshape [Reshape] outputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot__1393:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot__1393:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/BiasAdd [Add] inputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu [Relu] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape__1394 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape__1394 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape__1394 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape__1394 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape__1394:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape__1394:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape__1394 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape__1394:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape__1394:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Shape__1394:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/concat_1:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot__1401 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot__1401 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot__1401 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot__1401 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot__1401:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot__1401:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot__1401 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot__1401:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot__1401:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot__1401:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/BiasAdd:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_2/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_2/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Mean_1:0 -> (-1, -1, 1)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] 
[V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/mul:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 
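
The Norm blocks that keep recurring above (Mean as GlobalAveragePool, Sub, Square expressed as Mul, a second Mean, Add of a scalar, clip_by_value as Minimum/Maximum, Sqrt, truediv, mul by a (768) scale and add_1 of a (768) offset) spell out a layer normalization over the last axis that tf2onnx has exported op by op rather than as a single node. A minimal numpy sketch of that arithmetic follows; the epsilon and clip bounds are illustrative assumptions, since the actual scalar initializers (add/y:0, Const:0, Const_1:0) are not printed in this log.

import numpy as np

# Sketch of the per-op layer-norm chain registered above for the Norm_* layers.
# eps, clip_lo and clip_hi stand in for the unprinted initializer values.
def decomposed_layernorm(x, gamma, beta, eps=1e-12, clip_lo=0.0, clip_hi=1e12):
    mean = x.mean(axis=-1, keepdims=True)                       # .../Mean      -> (-1, -1, 1)
    centered = x - mean                                         # .../sub       -> (-1, -1, 768)
    var = (centered * centered).mean(axis=-1, keepdims=True)    # Square + Mean_1
    var = np.maximum(np.minimum(var + eps, clip_hi), clip_lo)   # add + clip_by_value
    return centered / np.sqrt(var) * gamma + beta               # Sqrt, truediv, mul, add_1

x = np.random.randn(2, 5, 768).astype(np.float32)
out = decomposed_layernorm(x, np.ones(768, np.float32), np.zeros(768, np.float32))
print(out.shape)  # (2, 5, 768)

Whether the builder later fuses this chain back into a single normalization kernel is not visible at this stage of the log.
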
[04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402:0 -> (3)[INT32]], 
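
Every dense layer in this export follows the same Tensordot lowering that the entries above show for the attention projections and, earlier, for FeedForward_2: the (batch, seq, 768) activation is flattened with a Reshape to (-1, 768), multiplied against a (768, N) weight with a plain 2-D MatMul (N is 3072 for dense_4, 768 for dense_5 and for the attention dense, dense_1 and dense_2 kernels), and the Shape/GatherV2/Concat side chain only rebuilds the 3-D output shape for the final Reshape. A small numpy sketch of the equivalence, with hypothetical input shapes:

import numpy as np

# Tensordot-as-MatMul pattern from the log: flatten the leading dims, do a 2-D
# matmul against the (in_dim, out_dim) weight, then restore (batch, seq, out).
def tensordot_last_axis(x, w):
    batch, seq, in_dim = x.shape              # Shape / GatherV2 recover (batch, seq)
    flat = x.reshape(-1, in_dim)              # .../Tensordot/Reshape -> (-1, 768)
    y = flat @ w                              # .../Tensordot/MatMul  -> (-1, N)
    return y.reshape(batch, seq, w.shape[1])  # .../Tensordot (Reshape back to 3-D)

x = np.random.randn(2, 5, 768).astype(np.float32)
w = np.random.randn(768, 3072).astype(np.float32)   # e.g. the dense_4 kernel
assert np.allclose(tensordot_last_axis(x, w), np.tensordot(x, w, axes=1),
                   rtol=1e-3, atol=1e-3)

The BiasAdd that follows each Tensordot in the log is an ordinary broadcast add of the (N,) bias vector.
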
[04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape/shape_Concat__1499 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape/shape_Concat__1499 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape/shape_Concat__1499 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape/shape_Concat__1499 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape/shape_Concat__1499:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape/shape_Concat__1499:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape/shape_Concat__1499 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape/shape_Concat__1499:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape/shape_Concat__1499:0 [04/08/2022-14:45:39] [V] 
[TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape/shape_Concat__1499:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/Shape__1402:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/concat_1 
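
Because both the batch and sequence dimensions are dynamic (-1, -1), reshape targets cannot be stored as constants, which is why each block carries a Shape, strided_slice, Concat and Cast side chain like the one above: the sequence length is sliced out of the runtime shape and concatenated with folded constants into a 4-element shape tensor. The sketch below assumes those constants are -1, 12 and 64, an inference from the (-1, -1, 12, 64) Reshape_2 output reported further on rather than from the unprinted const_fold_opt initializers.

import numpy as np

# Assembling the head-split reshape target at runtime, as the Shape / strided_slice_2 /
# shape_Concat__1499 entries do. The -1, 12 and 64 values are assumptions.
def head_reshape_target(x, num_heads=12, head_dim=64):
    seq_len = np.array(x.shape, dtype=np.int32)[1:2]        # Shape + strided_slice_2
    return np.concatenate([np.array([-1], np.int32),        # folded constant (wildcard)
                           seq_len,                         # runtime sequence length
                           np.array([num_heads], np.int32),
                           np.array([head_dim], np.int32)])

x = np.zeros((2, 5, 768), dtype=np.float32)
print(head_reshape_target(x))   # -> [-1  5 12 64]
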
[04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/BiasAdd [Add] 
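(Annotation, not part of the log.) The BiasAdd parsed next, followed by Reshape_2 and einsum_1/transpose below, splits the 768-wide projection into 12 heads of 64 and moves the head axis forward, matching the (-1, -1, 12, 64) and (-1, 12, -1, 64) shapes reported in the following entries. A rough NumPy equivalent; the head count and head size are read off the logged shapes, everything else is illustrative:

```python
import numpy as np

batch, seq, hidden, heads = 2, 128, 768, 12        # 12 * 64 = 768, as in the logged shapes
proj = np.random.randn(batch, seq, hidden).astype(np.float32)   # dense_2/Tensordot:0 stand-in
bias = np.random.randn(hidden).astype(np.float32)                # BiasAdd/ReadVariableOp:0 stand-in

x = proj + bias                                    # BiasAdd [Add], broadcast over (batch, seq)
x = x.reshape(batch, seq, heads, hidden // heads)  # Reshape_2        -> (-1, -1, 12, 64)
x = x.transpose(0, 2, 1, 3)                        # einsum_1/transpose -> (-1, 12, -1, 64)
print(x.shape)                                     # (2, 12, 128, 64)
```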
[04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1__1442 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1__1442 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1__1442 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1__1442 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1__1442:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1__1442:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1__1442 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1__1442:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1__1442:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape_1__1442:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Transpose__3527 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3527 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Transpose__3527 for ONNX node: Transpose__3527 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Transpose__3527:0 for ONNX tensor: Transpose__3527:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3527 [Transpose] outputs: [Transpose__3527:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Shape__3596 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3596 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Shape__3596 for ONNX node: Shape__3596 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Shape__3596:0 for ONNX tensor: 
Shape__3596:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3596 [Shape] outputs: [Shape__3596:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Gather__3600 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Shape__3596:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:39] [V] [TRT] Gather__3600 [Gather] inputs: [Shape__3596:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Gather__3600 for ONNX node: Gather__3600 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] Gather__3600 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape_1__1464:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1/shape_Concat__1483 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1/shape_Concat__1483 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1/shape_Concat__1483 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1/shape_Concat__1483 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1/shape_Concat__1483:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1/shape_Concat__1483:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1/shape_Concat__1483 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1/shape_Concat__1483:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1__1484 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1/shape_Concat__1483:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1__1484 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1/shape_Concat__1483:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1__1484 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1__1484 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1__1484:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1__1484:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1__1484 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1__1484:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Transpose__3527:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1__1484:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1 [Reshape] inputs: [Transpose__3527:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1__1484:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot__1425:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1__1463:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape:0 
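(Annotation, not part of the log.) The einsum that tf2onnx could not keep as a single op is decomposed in this stretch into Transposes, Reshapes and a batched MatMul: the query side arrives as (-1, 12, -1, 64), the key side (via Transpose__3527) as (-1, 12, 64, -1), and their product is the (-1, 12, -1, -1) score tensor that the truediv [Mul] further below rescales by a constant-folded reciprocal. A NumPy sketch of the same contraction; the 1/sqrt(64) scale is an assumption about that folded constant, whose value the log does not print:

```python
import numpy as np

batch, heads, seq, dim = 2, 12, 128, 64
q = np.random.randn(batch, seq, heads, dim).astype(np.float32)  # Reshape:0-style layout
k = np.random.randn(batch, seq, heads, dim).astype(np.float32)  # Reshape_1:0-style layout

q_t = q.transpose(0, 2, 1, 3)          # einsum/transpose   -> (batch, 12, seq, 64)
k_t = k.transpose(0, 2, 3, 1)          # Transpose__3527    -> (batch, 12, 64, seq)
scores = q_t @ k_t                     # einsum/MatMul      -> (batch, 12, seq, seq)
scores *= 1.0 / np.sqrt(dim)           # truediv [Mul]; assumed to be the folded 1/sqrt(64)
print(scores.shape)                    # (2, 12, 128, 128)
```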
[04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Shape__1501:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2/shape_Concat__1520 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2/shape_Concat__1520 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2/shape_Concat__1520 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2/shape_Concat__1520 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2/shape_Concat__1520:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2/shape_Concat__1520:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2/shape_Concat__1520 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2/shape_Concat__1520:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2__1531 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2/shape_Concat__1520:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2__1531 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2/shape_Concat__1520:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2__1531 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2__1531 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2__1531:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2__1531:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2__1531 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2__1531:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul [MatMul] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2__1531:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2__1531:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/truediv [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__148:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims [Unsqueeze] inputs: [Not__148:0 -> (-1, -1)[BOOL]], 
[const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/concat [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/concat [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/concat:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__163 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/concat:0 [04/08/2022-14:45:39] [V] [TRT] Not__163 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__163 for ONNX node: Not__163 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__163:0 for ONNX tensor: Not__163:0 [04/08/2022-14:45:39] [V] [TRT] Not__163 [Not] outputs: [Not__163:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Cast__166 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__163:0 [04/08/2022-14:45:39] [V] [TRT] Cast__166 [Cast] inputs: [Not__163:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Cast__166 for ONNX node: Cast__166 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Cast__166:0 for ONNX tensor: Cast__166:0 [04/08/2022-14:45:39] [V] [TRT] Cast__166 [Cast] outputs: [Cast__166:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/All_ReduceSum__172 [ReduceSum] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Cast__166:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/All_ReduceSum__172 [ReduceSum] inputs: [Cast__166:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/All_ReduceSum__172 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/All_ReduceSum__172 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/All_ReduceSum__172:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/All_ReduceSum__172:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/All_ReduceSum__172 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/All_ReduceSum__172:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Greater__176 [Greater] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/All_ReduceSum__172:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Greater__176 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_2/All_ReduceSum__172:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Greater__176 for ONNX node: Greater__176 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Greater__176:0 for ONNX tensor: Greater__176:0 [04/08/2022-14:45:39] [V] [TRT] Greater__176 [Greater] outputs: [Greater__176:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__179 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Greater__176:0 [04/08/2022-14:45:39] [V] [TRT] Not__179 [Not] inputs: [Greater__176:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__179 for ONNX node: Not__179 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__179:0 for ONNX tensor: Not__179:0 [04/08/2022-14:45:39] [V] [TRT] Not__179 [Not] outputs: [Not__179:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__179:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims [Unsqueeze] inputs: [Not__179:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/concat [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/concat [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/concat:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__188 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/concat:0 [04/08/2022-14:45:39] [V] [TRT] Not__188 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__188 for ONNX node: Not__188 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__188:0 for ONNX tensor: Not__188:0 [04/08/2022-14:45:39] [V] [TRT] Not__188 [Not] outputs: [Not__188:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Cast__191 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__188:0 [04/08/2022-14:45:39] [V] [TRT] Cast__191 [Cast] inputs: [Not__188:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Cast__191 for ONNX node: Cast__191 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Cast__191:0 for ONNX tensor: Cast__191:0 [04/08/2022-14:45:39] [V] [TRT] Cast__191 [Cast] outputs: [Cast__191:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/All_ReduceSum__197 [ReduceSum] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Cast__191:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/All_ReduceSum__197 [ReduceSum] inputs: [Cast__191:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/All_ReduceSum__197 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/All_ReduceSum__197 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/All_ReduceSum__197:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/All_ReduceSum__197:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/All_ReduceSum__197 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/All_ReduceSum__197:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Greater__201 [Greater] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/All_ReduceSum__197:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Greater__201 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_2/All_ReduceSum__197:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Greater__201 for ONNX node: Greater__201 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Greater__201:0 for ONNX tensor: Greater__201:0 [04/08/2022-14:45:39] [V] [TRT] Greater__201 [Greater] outputs: [Greater__201:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__204 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Greater__201:0 [04/08/2022-14:45:39] [V] [TRT] Not__204 [Not] inputs: [Greater__201:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__204 for ONNX node: Not__204 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__204:0 for ONNX tensor: Not__204:0 [04/08/2022-14:45:39] [V] [TRT] Not__204 [Not] outputs: [Not__204:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Cast [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__204:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Cast [Cast] inputs: [Not__204:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Cast [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Cast:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Cast:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) 
[04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul_1 [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/add 
for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax [Softmax] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering 
layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1/shape_Concat__1546 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1/shape_Concat__1546 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1/shape_Concat__1546 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1/shape_Concat__1546 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1/shape_Concat__1546:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1/shape_Concat__1546:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1/shape_Concat__1546 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1/shape_Concat__1546:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1__1547 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1/shape_Concat__1546:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1__1547 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1/shape_Concat__1546:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1__1547 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1__1547 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1__1547:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1__1547:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1__1547 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1__1547:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1__1547:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1__1547:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Shape__1532:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2/shape_Concat__1566 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2/shape_Concat__1566 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2/shape_Concat__1566 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2/shape_Concat__1566 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2/shape_Concat__1566:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2/shape_Concat__1566:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2/shape_Concat__1566 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2/shape_Concat__1566:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2__1577 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2/shape_Concat__1566:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2__1577 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2/shape_Concat__1566:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2__1577 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2__1577 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2__1577:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2__1577:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2__1577 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2__1577:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax:0 
[04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2__1577:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2__1577:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3__1578 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for 
input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3__1578 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3__1578 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3__1578 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3__1578:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3__1578:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3__1578 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3__1578:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3__1578:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Shape_3__1578:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3/shape_Concat__1590 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3/shape_Concat__1590 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] 
Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3/shape_Concat__1590 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3/shape_Concat__1590 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3/shape_Concat__1590:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3/shape_Concat__1590:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3/shape_Concat__1590 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3/shape_Concat__1590:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3__1591 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3/shape_Concat__1590:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3__1591 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3/shape_Concat__1590:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3__1591 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3__1591 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3__1591:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3__1591:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3__1591 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3__1591:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3__1591:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3__1591:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], 
[04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape__1592 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape__1592 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape__1592 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape__1592 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape__1592:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape__1592:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape__1592 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape__1592:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape__1592:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Shape__1592:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot__1599 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot__1599 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot__1599 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot__1599 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot__1599:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot__1599:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot__1599 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot__1599:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot__1599:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot__1599:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add [Add] inputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_2/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] 
[TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value [Max] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] 
[V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape__1600 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape__1600 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape__1600 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape__1600 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape__1600:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape__1600:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape__1600 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape__1600:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape__1600:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Shape__1600:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot__1607 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot__1607 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot__1607 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot__1607 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot__1607:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot__1607:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot__1607 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot__1607:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot__1607:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot__1607:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/BiasAdd [04/08/2022-14:45:39] [V] 
[TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu [Relu] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape__1608 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape__1608 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape__1608 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape__1608 
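
[Editor's note - annotation, not part of the trtexec output.] The Shape -> Cast -> Gather -> Concat -> Cast chains parsed above and continued below, together with the matching Tensordot Reshape -> MatMul -> Reshape -> BiasAdd nodes, are the exporter's lowering of a Dense layer applied through a tensordot to a (batch, seq, features) activation: the batch and sequence extents are gathered from the runtime shape, the input is flattened to (-1, in_features), multiplied by the weight matrix, then reshaped back before the bias is added. In this block dense_4 projects 768 -> 3072 and dense_5 projects 3072 -> 768, matching the tensor shapes in the log. A minimal NumPy sketch of the pattern, with illustrative function name and test shapes (not code from the model):

```python
import numpy as np

def tensordot_dense(x, w, b):
    """x: (batch, seq, in_f), w: (in_f, out_f), b: (out_f,) -> (batch, seq, out_f)."""
    out_shape = (*x.shape[:-1], w.shape[1])   # Shape + Gather + Concat: (batch, seq, out_f)
    flat = x.reshape(-1, x.shape[-1])         # Tensordot/Reshape: (-1, in_f)
    y = flat @ w                              # Tensordot/MatMul: (-1, out_f)
    return y.reshape(out_shape) + b           # Tensordot [Reshape] + BiasAdd

x = np.random.rand(2, 128, 768).astype(np.float32)
w = np.random.rand(768, 3072).astype(np.float32)
b = np.zeros(3072, dtype=np.float32)
print(tensordot_dense(x, w, b).shape)         # (2, 128, 3072)
```
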
[04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape__1608:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape__1608:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape__1608 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape__1608:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape__1608:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Shape__1608:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot__1615 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching 
for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot__1615 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot__1615 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot__1615 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot__1615:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot__1615:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot__1615 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot__1615:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/MatMul for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot__1615:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot__1615:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_3/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_3/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub:0 -> (-1, -1, 768)[FLOAT]], 
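
[Editor's note - annotation, not part of the trtexec output.] The Mean -> sub -> Square -> Mean_1 -> add -> clip_by_value/Minimum -> clip_by_value -> Sqrt -> truediv -> mul -> add_1 chain that each *-Norm_* block expands into (the Transformer-FeedForward-Norm_3 instance begins just above) is an unfused LayerNormalization over the last, 768-wide axis; TensorRT parses the two reduction steps as GlobalAveragePool, which is why their outputs have shape (-1, -1, 1). A minimal NumPy sketch of the computation; eps and the clip bounds are assumed placeholders, since the actual values live in the Const tensors referenced by the log and are not printed here:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-12, clip_min=1e-12, clip_max=np.finfo(np.float32).max):
    mean = x.mean(axis=-1, keepdims=True)                  # Mean          -> (-1, -1, 1)
    var = ((x - mean) ** 2).mean(axis=-1, keepdims=True)   # Square + Mean_1
    var = np.clip(var + eps, clip_min, clip_max)           # add + clip_by_value (Min/Max)
    y = (x - mean) / np.sqrt(var)                          # Sqrt + truediv
    return y * gamma + beta                                # mul (gamma) + add_1 (beta)

x = np.random.rand(2, 128, 768).astype(np.float32)
out = layer_norm(x, np.ones(768, np.float32), np.zeros(768, np.float32))
print(out.shape)                                           # (2, 128, 768)
```
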
[04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape/shape_Concat__1713 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape/shape_Concat__1713 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape/shape_Concat__1713 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape/shape_Concat__1713 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape/shape_Concat__1713:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape/shape_Concat__1713:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape/shape_Concat__1713 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape/shape_Concat__1713:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape/shape_Concat__1713:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape/shape_Concat__1713:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Shape__1616:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> 
(1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/BiasAdd [Add] [04/08/2022-14:45:39] [V] 
[TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1__1656 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1__1656 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1__1656 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1__1656 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1__1656:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1__1656:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1__1656 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1__1656:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1__1656:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape_1__1656:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot [Reshape] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Transpose__3533 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3533 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Transpose__3533 for ONNX node: Transpose__3533 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Transpose__3533:0 for ONNX tensor: Transpose__3533:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3533 [Transpose] outputs: [Transpose__3533:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Shape__3601 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3601 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Shape__3601 for ONNX node: Shape__3601 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Shape__3601:0 for ONNX tensor: Shape__3601:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3601 [Shape] outputs: [Shape__3601:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Gather__3605 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Shape__3601:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:39] [V] [TRT] Gather__3605 [Gather] inputs: [Shape__3601:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Gather__3605 for ONNX node: Gather__3605 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] Gather__3605 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape_1__1678:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_2 
[Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1/shape_Concat__1697 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1/shape_Concat__1697 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1/shape_Concat__1697 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1/shape_Concat__1697 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1/shape_Concat__1697:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1/shape_Concat__1697:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1/shape_Concat__1697 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1/shape_Concat__1697:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1__1698 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1/shape_Concat__1697:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1__1698 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1/shape_Concat__1697:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1__1698 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1__1698 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1__1698:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1__1698:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1__1698 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1__1698:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1 [Reshape] 
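Everything from einsum/Shape_1 down to einsum/Reshape_1__1698 above is shape plumbing: because the batch and sequence dimensions are dynamic (-1) in the exported graph, tf2onnx builds the reshape target for einsum/Reshape_1 at runtime with a Shape -> Slice -> Concat -> Cast chain, splicing the folded head constants in between the dynamic dims. A minimal numpy sketch of that pattern follows; the function name, the slice bounds, and the 12/64 constant values (inferred from the (-1, 12, 64, -1) shapes in the log) are assumptions, and reading the shape directly from the transposed key stands in for the Shape__3601 + Gather__3605 pair, which presumably permutes the shape the same way Transpose__3533 permutes the tensor.

```python
import numpy as np

def key_reshape_target(k_transposed):
    """Sketch of the Shape -> Slice -> Concat -> Cast chain feeding
    einsum/Reshape_1. k_transposed stands for Transpose__3533:0 with
    layout (batch, 12, 64, seq)."""
    shape = np.array(k_transposed.shape, dtype=np.int32)  # einsum/Shape_1 plus the no-op int32 Cast
    batch = shape[0:1]                                     # einsum/strided_slice_2 (assumed bounds)
    seq   = shape[3:4]                                     # einsum/strided_slice_3 (assumed bounds)
    heads = np.array([12], dtype=np.int32)                 # const_fold_opt__3668 (value inferred, not printed)
    hdim  = np.array([64], dtype=np.int32)                 # const_fold_opt__3897 (value inferred, not printed)
    return np.concatenate([batch, heads, hdim, seq])       # einsum/Reshape_1/shape_Concat__1697
```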
[04/08/2022-14:45:39] [V] [TRT] Searching for input: Transpose__3533:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1__1698:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1 [Reshape] inputs: [Transpose__3533:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1__1698:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot__1639:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape__1714:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape:0 
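At this point all three Tensordot projections of this attention block (dense_2, dense_1, dense) have been parsed, and each follows the same lowering: the (-1, -1, 768) input is flattened to (-1, 768), multiplied by a (768, 768) weight, reshaped back with the Tensordot__1639 shape, and the bias is broadcast-added. A small numpy sketch of that equivalence; the function and argument names are illustrative, not from the model:

```python
import numpy as np

def dense_on_rank3(x, W, b):
    """Keras Dense applied to a (batch, seq, 768) tensor, as the parser
    sees it: Tensordot/Reshape -> Tensordot/MatMul -> Tensordot (reshape
    back) -> BiasAdd."""
    batch, seq, d = x.shape            # d == 768 in this log
    y = x.reshape(-1, d) @ W           # (-1, 768) @ (768, 768) -> (-1, 768)
    y = y.reshape(batch, seq, -1)      # Reshape with the Tensordot__1639 target
    return y + b                       # BiasAdd, b has shape (768,)
```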
[04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715 
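The Reshape / Reshape_1 / Reshape_2 nodes then split the 768-wide projections into 12 heads of 64, and the einsum*/transpose nodes (plus Transpose__3533) lay the operands out for a batched matmul: two projections end up as (batch, 12, seq, 64) and the one going through Transpose__3533 as (batch, 12, 64, seq), which the log only identifies by shape; treating them as query/value and key respectively is an assumption. A sketch of the head split under that assumption:

```python
import numpy as np

def split_heads(y, num_heads=12, head_dim=64, as_key=False):
    """Reshape to (batch, seq, 12, 64) followed by a transpose:
    queries/values -> (batch, 12, seq, 64); the key path
    (Transpose__3533) -> (batch, 12, 64, seq)."""
    batch, seq, _ = y.shape
    y = y.reshape(batch, seq, num_heads, head_dim)
    return y.transpose(0, 2, 3, 1) if as_key else y.transpose(0, 2, 1, 3)
```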
[04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Shape__1715:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2/shape_Concat__1734 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2/shape_Concat__1734 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2/shape_Concat__1734 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2/shape_Concat__1734 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2/shape_Concat__1734:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2/shape_Concat__1734:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2/shape_Concat__1734 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2/shape_Concat__1734:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2__1745 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2/shape_Concat__1734:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2__1745 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2/shape_Concat__1734:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2__1745 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2__1745 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2__1745:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2__1745:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2__1745 [Cast] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2__1745:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2__1745:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2__1745:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/truediv [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__204:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims [Unsqueeze] inputs: [Not__204:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/concat [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/concat [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/concat:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__219 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/concat:0 [04/08/2022-14:45:39] [V] [TRT] Not__219 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__219 for ONNX node: Not__219 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__219:0 for ONNX tensor: Not__219:0 [04/08/2022-14:45:39] [V] [TRT] Not__219 [Not] outputs: [Not__219:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Cast__222 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__219:0 [04/08/2022-14:45:39] [V] [TRT] Cast__222 [Cast] inputs: [Not__219:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Cast__222 for ONNX node: Cast__222 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Cast__222:0 for ONNX tensor: Cast__222:0 [04/08/2022-14:45:39] [V] [TRT] Cast__222 [Cast] outputs: [Cast__222:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/All_ReduceSum__228 [ReduceSum] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Cast__222:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/All_ReduceSum__228 [ReduceSum] inputs: [Cast__222:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> 
(1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/All_ReduceSum__228 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/All_ReduceSum__228 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/All_ReduceSum__228:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/All_ReduceSum__228:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/All_ReduceSum__228 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/All_ReduceSum__228:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Greater__232 [Greater] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/All_ReduceSum__228:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Greater__232 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_3/All_ReduceSum__228:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Greater__232 for ONNX node: Greater__232 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Greater__232:0 for ONNX tensor: Greater__232:0 [04/08/2022-14:45:39] [V] [TRT] Greater__232 [Greater] outputs: [Greater__232:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__235 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Greater__232:0 [04/08/2022-14:45:39] [V] [TRT] Not__235 [Not] inputs: [Greater__232:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__235 for ONNX node: Not__235 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__235:0 for ONNX tensor: Not__235:0 [04/08/2022-14:45:39] [V] [TRT] Not__235 [Not] outputs: [Not__235:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__235:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1 [Unsqueeze] inputs: [Not__235:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/concat [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching 
for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/concat [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/concat:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__244 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/concat:0 [04/08/2022-14:45:39] [V] [TRT] Not__244 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__244 for ONNX node: Not__244 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__244:0 for ONNX tensor: Not__244:0 [04/08/2022-14:45:39] [V] [TRT] Not__244 [Not] outputs: [Not__244:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Cast__247 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__244:0 [04/08/2022-14:45:39] [V] [TRT] Cast__247 [Cast] inputs: [Not__244:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Cast__247 for ONNX node: Cast__247 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Cast__247:0 for ONNX tensor: Cast__247:0 [04/08/2022-14:45:39] [V] [TRT] Cast__247 [Cast] outputs: [Cast__247:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/All_ReduceSum__253 [ReduceSum] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Cast__247:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/All_ReduceSum__253 [ReduceSum] inputs: [Cast__247:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/All_ReduceSum__253 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/All_ReduceSum__253 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/All_ReduceSum__253:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/All_ReduceSum__253:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/All_ReduceSum__253 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/All_ReduceSum__253:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: 
Greater__257 [Greater] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/All_ReduceSum__253:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] Greater__257 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_3/All_ReduceSum__253:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Greater__257 for ONNX node: Greater__257 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Greater__257:0 for ONNX tensor: Greater__257:0 [04/08/2022-14:45:39] [V] [TRT] Greater__257 [Greater] outputs: [Greater__257:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Not__260 [Not] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Greater__257:0 [04/08/2022-14:45:39] [V] [TRT] Not__260 [Not] inputs: [Greater__257:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Not__260 for ONNX node: Not__260 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Not__260:0 for ONNX tensor: Not__260:0 [04/08/2022-14:45:39] [V] [TRT] Not__260 [Not] outputs: [Not__260:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Cast [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__260:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Cast [Cast] inputs: [Not__260:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Cast [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Cast:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Cast:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims:0 [04/08/2022-14:45:39] 
[V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul_1 [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul_1 [Mul] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/add [Add] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax [Softmax] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1/shape_Concat__1760 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1/shape_Concat__1760 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1/shape_Concat__1760 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1/shape_Concat__1760 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1/shape_Concat__1760:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1/shape_Concat__1760:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1/shape_Concat__1760 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1/shape_Concat__1760:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1__1761 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1/shape_Concat__1760:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1__1761 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1/shape_Concat__1760:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1__1761 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1__1761 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1__1761:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1__1761:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1__1761 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1__1761:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1__1761:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1__1761:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Shape__1746:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2/shape_Concat__1780 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2/shape_Concat__1780 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2/shape_Concat__1780 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2/shape_Concat__1780 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2/shape_Concat__1780:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2/shape_Concat__1780:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2/shape_Concat__1780 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2/shape_Concat__1780:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2__1791 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2/shape_Concat__1780:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2__1791 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2/shape_Concat__1780:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2__1791 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2__1791 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2__1791:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2__1791:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2__1791 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2__1791:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2__1791:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2__1791:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3__1792 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3__1792 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] 
Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3__1792 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3__1792 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3__1792:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3__1792:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3__1792 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3__1792:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3__1792:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Shape_3__1792:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3/shape_Concat__1804 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3/shape_Concat__1804 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3/shape_Concat__1804 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3/shape_Concat__1804 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3/shape_Concat__1804:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3/shape_Concat__1804:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3/shape_Concat__1804 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3/shape_Concat__1804:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3__1805 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3/shape_Concat__1804:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3__1805 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3/shape_Concat__1804:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3__1805 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3__1805 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3__1805:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3__1805:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3__1805 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3__1805:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3__1805:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3__1805:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape 
[Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape__1806 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape__1806 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape__1806 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape__1806 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape__1806:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape__1806:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape__1806 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape__1806:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape__1806:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Shape__1806:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/GatherV2:0 
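
Note (not part of the trtexec output): taken together, the nodes parsed above for Transformer-MultiHeadSelfAttention_4 amount to one masked scaled-dot-product attention block plus its output projection. The einsum is lowered by tf2onnx to Reshape/MatMul/Reshape, the truediv is folded into a Mul by a precomputed reciprocal, the earlier Not/Cast/All_ReduceSum/Greater sequence appears to rebuild the boolean padding mask, the ExpandDims/sub/mul_1/add chain turns it into an additive mask on the scores, and the dense_3 Tensordot is lowered to a flattening Reshape, a 768x768 MatMul and a Reshape back to rank 3, followed by BiasAdd. Below is a minimal NumPy sketch of that computation, assuming the folded reciprocal is 1/sqrt(64), sub/x is 1.0 and Cast_1/x is a large negative constant (none of these values are printed in the log), with random arrays standing in for the real tensors; the 12 heads, head size 64 and hidden size 768 follow the logged shapes:

# Illustrative sketch only; shapes follow the log, constants and weights are assumed.
import numpy as np

B, T, H, D = 2, 16, 12, 64            # batch, sequence, heads, head size (H*D = 768)

q = np.random.randn(B, H, T, D).astype(np.float32)   # stands in for einsum/Reshape:0
k = np.random.randn(B, H, D, T).astype(np.float32)   # stands in for einsum/Reshape_1:0
v = np.random.randn(B, H, T, D).astype(np.float32)   # stands in for einsum_1/Reshape_1:0
pad = (np.arange(T) < 10).astype(np.float32)[None, :].repeat(B, 0)  # stands in for Cast:0

# einsum -> Reshape/MatMul/Reshape_2: raw attention scores, shape (B, H, T, T)
scores = q @ k

# truediv folded into a Mul by a reciprocal (assumed 1/sqrt(D))
scores = scores * np.float32(1.0 / np.sqrt(D))

# ExpandDims/ExpandDims_1 give the mask shape (B, 1, 1, T); then
# mul = scores * m, sub = 1 - m, mul_1 = big_neg * sub, add = mul + mul_1
m = pad[:, None, None, :]
big_neg = np.float32(-1e12)                           # assumed value of Cast_1/x
logits = scores * m + big_neg * (1.0 - m)

# Softmax over the last axis
p = np.exp(logits - logits.max(-1, keepdims=True))
p /= p.sum(-1, keepdims=True)

# einsum_1 -> Reshape/MatMul/Reshape_2, transpose_1, Reshape_3: context back to (B, T, 768)
ctx = (p @ v).transpose(0, 2, 1, 3).reshape(B, T, H * D)

# dense_3 Tensordot: flatten to (B*T, 768), MatMul with a 768x768 weight, reshape, BiasAdd
W = np.random.randn(H * D, H * D).astype(np.float32)  # stands in for Tensordot/Reshape_1:0
b = np.zeros(H * D, dtype=np.float32)                 # stands in for BiasAdd/ReadVariableOp:0
out = (ctx.reshape(-1, H * D) @ W).reshape(B, T, H * D) + b

print(out.shape)  # (2, 16, 768)

With these toy shapes the sketch prints (2, 16, 768), matching the (-1, -1, 768) shape logged for dense_3/BiasAdd:0 just before the residual Add that follows.
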
[04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot__1813 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot__1813 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot__1813 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot__1813 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot__1813:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot__1813:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot__1813 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot__1813:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3:0 [04/08/2022-14:45:39] [V] [TRT] Searching for 
input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot__1813:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot [Reshape] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot__1813:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_3/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Square 
[Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add:0 -> (-1, -1, 
1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Sqrt [Sqrt] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/mul:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape__1814 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape__1814 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape__1814 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape__1814 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape__1814:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape__1814:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape__1814 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape__1814:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape__1814:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Shape__1814:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/concat_1:0 -> 
(3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot__1821 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot__1821 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot__1821 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot__1821 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot__1821:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot__1821:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot__1821 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot__1821:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 
3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot__1821:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot__1821:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], 
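(The Transformer-MultiHeadSelfAttention-Norm_4 records a few lines above — Mean, sub, Square, Mean_1, add, clip_by_value/Minimum, clip_by_value, Sqrt, truediv, mul, add_1 — spell out a layer normalization along the 768-wide hidden axis; the per-token mean is logged as a GlobalAveragePool with output shape (-1, -1, 1). A rough NumPy sketch of that sub-graph follows; the epsilon and the clip bounds are placeholders, since the actual constants live in folded initializers that the log does not print.)

import numpy as np

def layer_norm(x, gamma, beta, eps=1e-12, clip_max=1e9):
    # Mean, sub, Square, Mean_1, add(eps), clip_by_value (Minimum then Maximum),
    # Sqrt, truediv, mul(gamma), add_1(beta) -- following the Norm_* records above.
    mean = x.mean(axis=-1, keepdims=True)                      # Mean (logged as GlobalAveragePool)
    centered = x - mean                                        # sub
    var = (centered * centered).mean(axis=-1, keepdims=True)   # Square + Mean_1
    var = np.minimum(var + eps, clip_max)                      # add + clip_by_value/Minimum (bound is a placeholder)
    var = np.maximum(var, 0.0)                                 # clip_by_value [Max] (bound is a placeholder)
    return centered / np.sqrt(var) * gamma + beta              # Sqrt, truediv, mul, add_1

x = np.random.randn(2, 7, 768).astype(np.float32)
gamma = np.ones(768, dtype=np.float32)
beta = np.zeros(768, dtype=np.float32)
print(layer_norm(x, gamma, beta).shape)                        # (2, 7, 768)

(The identical pattern reappears below as the Transformer-FeedForward-Norm_4 records.)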
[04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu [Relu] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape__1822 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape__1822 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape__1822 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape__1822 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape__1822:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape__1822:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape__1822 [Cast] outputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape__1822:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape__1822:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Shape__1822:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot__1829 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot__1829 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 
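(For orientation: the dense_4 MatMul against a (768, 3072) weight plus the Relu parsed just above, together with the dense_5 projection whose (3072, 768) MatMul appears a few records further on, make up one position-wise feed-forward block; the Transformer-FeedForward-Add_4/add and Transformer-FeedForward-Norm_4 records that follow add the residual connection and the normalization. A compact sketch, with the same caveat that every name here is illustrative:)

import numpy as np

def feed_forward(x, w1, b1, w2, b2):
    # dense_4: Tensordot/MatMul against (768, 3072) + BiasAdd + Relu,
    # dense_5: Tensordot/MatMul against (3072, 768) + BiasAdd.
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

x = np.random.randn(2, 7, 768).astype(np.float32)
w1 = np.random.randn(768, 3072).astype(np.float32) * 0.02
b1 = np.zeros(3072, dtype=np.float32)
w2 = np.random.randn(3072, 768).astype(np.float32) * 0.02
b2 = np.zeros(768, dtype=np.float32)
print(feed_forward(x, w1, b1, w2, b2).shape)   # (2, 7, 768)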
[04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot__1829 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot__1829 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot__1829:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot__1829:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot__1829 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot__1829:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/MatMul [MatMul] 
outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot__1829:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot__1829:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_4/add_1:0 -> (-1, -1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub [Sub] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Square [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Square [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Square:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value/Minimum [Min] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value/Minimum [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value [Max] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value/Minimum:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Sqrt [Sqrt] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Sqrt [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Sqrt:0 [04/08/2022-14:45:39] [V] 
[TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/truediv [Div] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Sqrt:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/mul [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/truediv:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/mul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/mul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1 [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/mul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering 
layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1/shape_Concat__1890 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1/shape_Concat__1890 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1/shape_Concat__1890 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1/shape_Concat__1890 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1/shape_Concat__1890:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1/shape_Concat__1890:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1/shape_Concat__1890 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1/shape_Concat__1890:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1/shape_Concat__1890:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1/shape_Concat__1890:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_1__1876:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/GatherV2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/concat_1 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/GatherV2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/concat_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/concat_1 [Concat] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] 
[TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1 [Shape] 
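
The Transformer-FeedForward-Norm_4 chain parsed above (Square -> Mean_1 -> add -> clip_by_value Minimum/Maximum -> Sqrt -> truediv -> mul -> add_1, applied to the earlier sub output) is the layer-normalization pattern tf2onnx emits for this model's normalization layers. A minimal numpy sketch of the equivalent computation follows; the epsilon and clip bounds are placeholders, since the real values live in the Const tensors (add/y, Const_1, Const) whose contents are not printed in this log:

import numpy as np

def layer_norm(x, gamma, beta, eps=1e-12, clip_min=1e-12, clip_max=1e12):
    # eps/clip_min/clip_max are placeholder values; the real ones sit in the
    # add/y, Const_1 and Const initializers referenced by the log.
    mean = x.mean(axis=-1, keepdims=True)                     # earlier Mean node
    centered = x - mean                                       # sub
    var = (centered * centered).mean(axis=-1, keepdims=True)  # Square [Mul] + Mean_1 [GlobalAveragePool]
    var = np.clip(var + eps, clip_min, clip_max)              # add + clip_by_value (Minimum, then Maximum)
    normed = centered / np.sqrt(var)                          # Sqrt + truediv
    return normed * gamma + beta                              # mul (gamma, 768) + add_1 (beta, 768)

x = np.random.randn(2, 128, 768).astype(np.float32)          # (batch, seq, hidden)
gamma = np.ones(768, dtype=np.float32)
beta = np.zeros(768, dtype=np.float32)
y = layer_norm(x, gamma, beta)                                # (2, 128, 768), matching (-1, -1, 768)
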
[04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1__1870 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1__1870 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1__1870 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1__1870 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1__1870:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1__1870:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1__1870 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1__1870:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1__1870:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape_1__1870:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_3 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Transpose__3539 [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3539 [Transpose] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Transpose__3539 for ONNX node: Transpose__3539 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Transpose__3539:0 for ONNX tensor: Transpose__3539:0 [04/08/2022-14:45:39] [V] [TRT] Transpose__3539 [Transpose] outputs: [Transpose__3539:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Shape__3606 [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3606 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: Shape__3606 for ONNX node: Shape__3606 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: Shape__3606:0 for ONNX tensor: Shape__3606:0 [04/08/2022-14:45:39] [V] [TRT] Shape__3606 [Shape] outputs: [Shape__3606:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: Gather__3610 [Gather] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Shape__3606:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:39] [V] [TRT] Gather__3610 [Gather] inputs: [Shape__3606:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:39] [V] [TRT] Registering layer: Gather__3610 for ONNX node: Gather__3610 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] Gather__3610 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape_1__1892:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1/shape_Concat__1911 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1/shape_Concat__1911 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1/shape_Concat__1911 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1/shape_Concat__1911 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1/shape_Concat__1911:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1/shape_Concat__1911:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1/shape_Concat__1911 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1/shape_Concat__1911:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1__1912 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1/shape_Concat__1911:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1__1912 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1/shape_Concat__1911:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1__1912 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1__1912 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1__1912:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1__1912:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1__1912 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1__1912:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Transpose__3539:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1__1912:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1 [Reshape] inputs: [Transpose__3539:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1__1912:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot__1845:0 -> (3)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/BiasAdd [Add] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/BiasAdd [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/BiasAdd:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1__1891:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose [Transpose] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape:0 
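
Every dense*/Tensordot block in this attention layer follows the same lowering of a Keras Dense applied to a rank-3 tensor: flatten to 2-D, MatMul against the 768x768 kernel, reshape back with a shape that the Shape/Gather/Concat nodes rebuild at runtime (batch and sequence length are dynamic), add the 768-wide bias, then reshape to (-1, -1, 12, 64) to split the 12 attention heads. A sketch of that pattern in numpy; the kernel/bias arrays here merely stand in for the ReadVariableOp/Reshape_1 constants named in the log:

import numpy as np

def dense_then_split_heads(x, kernel, bias, num_heads=12, head_dim=64):
    # Mirrors Tensordot/Reshape -> Tensordot/MatMul -> Tensordot (Reshape back)
    # -> BiasAdd -> Reshape as parsed above; weight values are illustrative.
    batch, seq, hidden = x.shape                   # recovered via Shape/Gather/Concat at runtime
    flat = x.reshape(-1, hidden)                   # Tensordot/Reshape -> (-1, 768)
    proj = flat @ kernel                           # Tensordot/MatMul with the (768, 768) kernel
    proj = proj.reshape(batch, seq, hidden)        # Tensordot (Reshape back) -> (-1, -1, 768)
    proj = proj + bias                             # BiasAdd, bias shape (768,)
    return proj.reshape(batch, seq, num_heads, head_dim)  # head-split Reshape -> (-1, -1, 12, 64)

x = np.random.randn(2, 128, 768).astype(np.float32)
kernel = np.random.randn(768, 768).astype(np.float32) * 0.02
bias = np.zeros(768, dtype=np.float32)
heads = dense_then_split_heads(x, kernel, bias)    # (2, 128, 12, 64)
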
[04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape [Shape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing 
node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice [Slice] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Shape__1929:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2/shape_Concat__1948 [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_fold_opt__3668 
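
The einsum subgraph being assembled here is the query-key score computation of the self-attention block: one projection is transposed to (-1, 12, -1, 64) by einsum/transpose, the other to (-1, 12, 64, -1) via Transpose__3539 and einsum/Reshape_1, and the batched einsum/MatMul parsed just below produces (-1, 12, -1, -1) attention logits; the surrounding Shape/Slice/Concat nodes only rebuild that output shape for the final Reshape_2, again because batch and sequence length are dynamic. A numpy sketch under the assumption that the two operands are the query and key projections (the log does not show which dense layer plays which role, and the 1/sqrt(64) scaling and masking are handled by later nodes):

import numpy as np

def attention_logits(q, k):
    # q, k: (batch, seq, 12, 64), as produced by the head-split Reshape nodes.
    q_t = q.transpose(0, 2, 1, 3)   # einsum/transpose -> (batch, 12, seq, 64)
    k_t = k.transpose(0, 2, 3, 1)   # Transpose__3539 + einsum/Reshape_1 -> (batch, 12, 64, seq)
    return np.matmul(q_t, k_t)      # einsum/MatMul -> (batch, 12, seq, seq)

q = np.random.randn(2, 128, 12, 64).astype(np.float32)
k = np.random.randn(2, 128, 12, 64).astype(np.float32)
scores = attention_logits(q, k)     # (2, 12, 128, 128)
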
[04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2/shape_Concat__1948 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2/shape_Concat__1948 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2/shape_Concat__1948 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2/shape_Concat__1948:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2/shape_Concat__1948:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2/shape_Concat__1948 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2/shape_Concat__1948:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2__1959 [Cast] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2/shape_Concat__1948:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2__1959 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2/shape_Concat__1948:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2__1959 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2__1959 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2__1959:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2__1959:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2__1959 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2__1959:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:39] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul [MatMul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2__1959:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2__1959:0 -> (4)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/truediv [Mul] [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/truediv [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/truediv:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:39] [V] [TRT] Searching for input: Not__260:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1 [Unsqueeze] inputs: [Not__260:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:39] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1 [04/08/2022-14:45:39] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/concat [Concat] [04/08/2022-14:45:39] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1:0 [04/08/2022-14:45:39] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:39] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/concat [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/concat:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__275 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/concat:0 [04/08/2022-14:45:40] [V] [TRT] Not__275 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__275 for ONNX node: Not__275 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__275:0 for ONNX tensor: Not__275:0 [04/08/2022-14:45:40] [V] [TRT] Not__275 [Not] outputs: [Not__275:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Cast__278 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__275:0 [04/08/2022-14:45:40] [V] [TRT] Cast__278 [Cast] inputs: [Not__275:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Cast__278 for ONNX node: Cast__278 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Cast__278:0 for ONNX tensor: Cast__278:0 [04/08/2022-14:45:40] [V] [TRT] Cast__278 [Cast] outputs: [Cast__278:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/All_ReduceSum__284 [ReduceSum] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Cast__278:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/All_ReduceSum__284 [ReduceSum] inputs: [Cast__278:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/All_ReduceSum__284 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/All_ReduceSum__284 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/All_ReduceSum__284:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/All_ReduceSum__284:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/All_ReduceSum__284 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/All_ReduceSum__284:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Greater__288 [Greater] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/All_ReduceSum__284:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] Greater__288 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_4/All_ReduceSum__284:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Greater__288 for ONNX node: Greater__288 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Greater__288:0 for ONNX tensor: Greater__288:0 [04/08/2022-14:45:40] [V] [TRT] Greater__288 [Greater] outputs: [Greater__288:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__291 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Greater__288:0 [04/08/2022-14:45:40] [V] [TRT] Not__291 [Not] inputs: [Greater__288:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__291 for ONNX node: Not__291 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__291:0 for ONNX tensor: Not__291:0 [04/08/2022-14:45:40] [V] [TRT] Not__291 [Not] outputs: [Not__291:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__291:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1 [Unsqueeze] inputs: [Not__291:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/concat [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], 
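The chain just parsed (ExpandDims_1, concat, Not, Cast, ReduceSum, Greater, Not) is a boolean reduce_all over the two stacked padding masks, emulated with a float ReduceSum because ONNX has no ReduceAll; the node name All_ReduceSum__284 reflects that. A small numpy sketch of the equivalent logic, assuming the folded Const it compares against is 0 (any threshold below 1 behaves the same):

import numpy as np

# Two copies of the boolean padding mask, stacked on a new leading axis
# (ExpandDims_1 + concat in the log).
mask = np.array([[True, True, False],
                 [True, False, False]])                  # (batch, seq)
stacked = np.concatenate([mask[None], mask[None]], 0)    # (2, batch, seq)

# reduce_all emulated as: not( sum( cast(not x) ) > 0 )
not_stacked = ~stacked                                   # Not__275
as_float    = not_stacked.astype(np.float32)             # Cast__278
summed      = as_float.sum(axis=0)                       # All_ReduceSum__284
any_false   = summed > 0.0                               # Greater__288 (assumed threshold 0)
all_true    = ~any_false                                 # Not__291
assert np.array_equal(all_true, stacked.all(axis=0))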
[04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/concat [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/concat:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__300 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/concat:0 [04/08/2022-14:45:40] [V] [TRT] Not__300 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__300 for ONNX node: Not__300 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__300:0 for ONNX tensor: Not__300:0 [04/08/2022-14:45:40] [V] [TRT] Not__300 [Not] outputs: [Not__300:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Cast__303 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__300:0 [04/08/2022-14:45:40] [V] [TRT] Cast__303 [Cast] inputs: [Not__300:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Cast__303 for ONNX node: Cast__303 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Cast__303:0 for ONNX tensor: Cast__303:0 [04/08/2022-14:45:40] [V] [TRT] Cast__303 [Cast] outputs: [Cast__303:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/All_ReduceSum__309 [ReduceSum] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Cast__303:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/All_ReduceSum__309 [ReduceSum] inputs: [Cast__303:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/All_ReduceSum__309 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/All_ReduceSum__309 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/All_ReduceSum__309:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/All_ReduceSum__309:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/All_ReduceSum__309 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/All_ReduceSum__309:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Greater__313 [Greater] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/All_ReduceSum__309:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] Greater__313 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_4/All_ReduceSum__309:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> 
()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Greater__313 for ONNX node: Greater__313 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Greater__313:0 for ONNX tensor: Greater__313:0 [04/08/2022-14:45:40] [V] [TRT] Greater__313 [Greater] outputs: [Greater__313:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__316 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Greater__313:0 [04/08/2022-14:45:40] [V] [TRT] Not__316 [Not] inputs: [Greater__313:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__316 for ONNX node: Not__316 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__316:0 for ONNX tensor: Not__316:0 [04/08/2022-14:45:40] [V] [TRT] Not__316 [Not] outputs: [Not__316:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__316:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast [Cast] inputs: [Not__316:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
const_starts__807 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul_1 [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul_1:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax [Softmax] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/add:0 -> (-1, 12, -1, -1)[FLOAT]], 
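Upstream of the Softmax whose inputs were just resolved, the truediv / Cast / ExpandDims / sub / mul_1 / mul / add nodes implement masked, scaled dot-product attention: the (-1, 12, -1, -1) score tensor is multiplied by the folded reciprocal of sqrt(64), and padded key positions are driven to a large negative logit before the Softmax. A rough numpy equivalent; the scalar constants are folded into the graph and not printed in the log, so the -1e12 below is a typical choice rather than a value read from the model:

import numpy as np

def masked_softmax_attention(scores, key_mask, neg=-1e12):
    # scores: (batch, heads, q_len, k_len); key_mask: (batch, k_len), 1 = real token, 0 = padding
    scaled = scores * (1.0 / np.sqrt(64.0))               # truediv exported as Mul by the folded reciprocal
    m = key_mask[:, None, None, :].astype(scores.dtype)   # Cast + ExpandDims + ExpandDims_1 -> (batch, 1, 1, k_len)
    logits = scaled * m + neg * (1.0 - m)                 # mul + mul_1 (+ sub) + add
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)              # Softmax over the last axis

probs = masked_softmax_attention(np.random.rand(2, 12, 5, 5).astype(np.float32),
                                 np.array([[1, 1, 1, 0, 0], [1, 1, 0, 0, 0]]))
print(probs.shape)                                        # (2, 12, 5, 5)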
[04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960:0 [04/08/2022-14:45:40] [V] [TRT] 
Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1/shape_Concat__1974 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1/shape_Concat__1974 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1/shape_Concat__1974 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1/shape_Concat__1974 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1/shape_Concat__1974:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1/shape_Concat__1974:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1/shape_Concat__1974 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1/shape_Concat__1974:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1__1975 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1/shape_Concat__1974:0 
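The einsum_1 subgraph whose shape tensors are being concatenated here (its Reshape, MatMul and transpose nodes follow in the next entries) is the second attention einsum: attention probabilities times the value tensor, again lowered to transposes and reshapes around a single batched MatMul. A compact numpy sketch using the (-1, 12, -1, 64) shapes reported in the log with example sizes:

import numpy as np

batch, heads, q_len, k_len, head_dim = 2, 12, 5, 5, 64
probs = np.random.rand(batch, heads, q_len, k_len).astype(np.float32)   # Softmax output
v     = np.random.rand(batch, k_len, heads, head_dim).astype(np.float32)

# probs x values, expressed as transpose + batched MatMul + transpose
v_t = np.transpose(v, (0, 2, 1, 3))          # einsum_1/transpose   -> (batch, heads, k_len, head_dim)
ctx = probs @ v_t                            # einsum_1/MatMul      -> (batch, heads, q_len, head_dim)
ctx = np.transpose(ctx, (0, 2, 1, 3))        # einsum_1/transpose_1 -> (batch, q_len, heads, head_dim)
assert ctx.shape == (batch, q_len, heads, head_dim)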
[04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1__1975 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1/shape_Concat__1974:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1__1975 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1__1975 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1__1975:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1__1975:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1__1975 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1__1975:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1__1975:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1__1975:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] 
Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Shape__1960:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2/shape_Concat__1994 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2/shape_Concat__1994 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2/shape_Concat__1994 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2/shape_Concat__1994 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2/shape_Concat__1994:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2/shape_Concat__1994:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2/shape_Concat__1994 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2/shape_Concat__1994:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2__2005 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2/shape_Concat__1994:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2__2005 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2/shape_Concat__1994:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2__2005 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2__2005 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2__2005:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2__2005:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2__2005 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2__2005:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2__2005:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2__2005:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3__2006 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3__2006 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3__2006 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3__2006 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3__2006:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3__2006:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3__2006 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3__2006:0 -> 
(4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3__2006:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Shape_3__2006:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3/shape_Concat__2018 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3/shape_Concat__2018 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3/shape_Concat__2018 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3/shape_Concat__2018 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3/shape_Concat__2018:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3/shape_Concat__2018:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3/shape_Concat__2018 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3/shape_Concat__2018:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3__2019 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3/shape_Concat__2018:0 
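The Reshape_3 shape being assembled from this Concat merges the 12 heads of width 64 back into the 768-wide hidden axis reported for Reshape_3's output, and the dense_3/Tensordot nodes that follow apply the attention output projection. A minimal numpy sketch; the 768x768 kernel below is random and purely illustrates the shapes:

import numpy as np

batch, seq, heads, head_dim = 2, 5, 12, 64
ctx = np.random.rand(batch, seq, heads, head_dim).astype(np.float32)    # einsum_1/transpose_1 output
W   = np.random.rand(heads * head_dim, 768).astype(np.float32)          # dense_3 kernel (illustrative values)
b   = np.zeros(768, dtype=np.float32)

merged = ctx.reshape(batch, seq, heads * head_dim)       # Reshape_3 -> (batch, seq, 768)
out    = np.tensordot(merged, W, axes=[[2], [0]]) + b    # dense_3/Tensordot plus bias
print(out.shape)                                         # (2, 5, 768)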
[04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3__2019 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3/shape_Concat__2018:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3__2019 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3__2019 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3__2019:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3__2019:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3__2019 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3__2019:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3__2019:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3__2019:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] 
[TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape__2020 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape__2020 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape__2020 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape__2020 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape__2020:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape__2020:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape__2020 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape__2020:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape__2020:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Shape__2020:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot__2027 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot__2027 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot__2027 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot__2027 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot__2027:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot__2027:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot__2027 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot__2027:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot__2027:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot__2027:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot:0 for ONNX 
tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_4/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Square [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Square [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Square:0 for ONNX 
tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value/Minimum [Min] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value/Minimum [Min] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value/Minimum [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value [Max] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Sqrt [Sqrt] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Sqrt [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/truediv [Div] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1 [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape__2028 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape__2028 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape__2028 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape__2028 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape__2028:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape__2028:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape__2028 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape__2028:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] 
[TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape__2028:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Shape__2028:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot__2035 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot__2035 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot__2035 for 
ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot__2035 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot__2035:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot__2035:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot__2035 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot__2035:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/MatMul:0 -> (-1, 
3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot__2035:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot__2035:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu [Relu] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu [04/08/2022-14:45:40] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape__2036 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape__2036 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape__2036 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape__2036 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape__2036:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape__2036:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape__2036 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape__2036:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape__2036:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Shape__2036:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] 
[V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot__2043 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot__2043 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot__2043 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot__2043 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot__2043:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot__2043:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot__2043 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot__2043:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot__2043:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot/MatMul:0 -> (-1, 
768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot__2043:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_5/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Square [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Square [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Square [Mul] 
outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value/Minimum [Min] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value/Minimum [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value [Max] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Sqrt [Sqrt] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Sqrt [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/truediv [Div] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/sub:0 -> (-1, -1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1 [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] 
Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_2 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1/shape_Concat__2104 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1/shape_Concat__2104 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1/shape_Concat__2104 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1/shape_Concat__2104 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1/shape_Concat__2104:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1/shape_Concat__2104:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1/shape_Concat__2104 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1/shape_Concat__2104:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1/shape_Concat__2104:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1/shape_Concat__2104:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_1__2090:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/GatherV2 for 
ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059 [Cast] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1 
[Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1__2084 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1__2084 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1__2084 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1__2084 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1__2084:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1__2084:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1__2084 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1__2084:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1__2084:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape_1__2084:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059:0 
[04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Transpose__3545 [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] Transpose__3545 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Transpose__3545 for ONNX node: Transpose__3545 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Transpose__3545:0 for ONNX tensor: Transpose__3545:0 [04/08/2022-14:45:40] [V] [TRT] Transpose__3545 [Transpose] outputs: [Transpose__3545:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Shape__3611 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] Shape__3611 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Shape__3611 for ONNX node: Shape__3611 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Shape__3611:0 for ONNX tensor: Shape__3611:0 [04/08/2022-14:45:40] [V] [TRT] Shape__3611 [Shape] outputs: [Shape__3611:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Gather__3615 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Shape__3611:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:40] [V] [TRT] Gather__3615 [Gather] inputs: [Shape__3611:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Gather__3615 for ONNX node: Gather__3615 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] Gather__3615 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape_1__2106:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_2:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1/shape_Concat__2125 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1/shape_Concat__2125 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1/shape_Concat__2125 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1/shape_Concat__2125 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1/shape_Concat__2125:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1/shape_Concat__2125:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1/shape_Concat__2125 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1/shape_Concat__2125:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1__2126 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1/shape_Concat__2125:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1__2126 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1/shape_Concat__2125:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1__2126 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1__2126 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1__2126:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1__2126:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1__2126 [Cast] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1__2126:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Transpose__3545:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1__2126:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1 [Reshape] inputs: [Transpose__3545:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1__2126:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot__2059:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1__2105:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], 
[const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Shape__2143:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2/shape_Concat__2162 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2/shape_Concat__2162 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_1:0 -> (1)[INT32]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2/shape_Concat__2162 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2/shape_Concat__2162 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2/shape_Concat__2162:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2/shape_Concat__2162:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2/shape_Concat__2162 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2/shape_Concat__2162:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2__2173 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2/shape_Concat__2162:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2__2173 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2/shape_Concat__2162:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2__2173 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2__2173 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2__2173:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2__2173:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2__2173 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2__2173:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2__2173:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2__2173:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__316:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1 [Unsqueeze] inputs: [Not__316:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/concat [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/concat for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/concat [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/concat:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__331 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/concat:0 [04/08/2022-14:45:40] [V] [TRT] Not__331 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__331 for ONNX node: Not__331 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__331:0 for ONNX tensor: Not__331:0 [04/08/2022-14:45:40] [V] [TRT] Not__331 [Not] outputs: [Not__331:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Cast__334 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__331:0 [04/08/2022-14:45:40] [V] [TRT] Cast__334 [Cast] inputs: [Not__331:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Cast__334 for ONNX node: Cast__334 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Cast__334:0 for ONNX tensor: Cast__334:0 [04/08/2022-14:45:40] [V] [TRT] Cast__334 [Cast] outputs: [Cast__334:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/All_ReduceSum__340 [ReduceSum] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Cast__334:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/All_ReduceSum__340 [ReduceSum] inputs: [Cast__334:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/All_ReduceSum__340 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/All_ReduceSum__340 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/All_ReduceSum__340:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/All_ReduceSum__340:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/All_ReduceSum__340 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/All_ReduceSum__340:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Greater__344 [Greater] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/All_ReduceSum__340:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] Greater__344 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_5/All_ReduceSum__340:0 -> (-1, -1)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Greater__344 for ONNX node: Greater__344 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Greater__344:0 for ONNX tensor: Greater__344:0 [04/08/2022-14:45:40] [V] [TRT] Greater__344 [Greater] outputs: [Greater__344:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__347 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Greater__344:0 [04/08/2022-14:45:40] [V] [TRT] Not__347 [Not] inputs: [Greater__344:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__347 for ONNX node: Not__347 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__347:0 for ONNX tensor: Not__347:0 [04/08/2022-14:45:40] [V] [TRT] Not__347 [Not] outputs: [Not__347:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__347:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims [Unsqueeze] inputs: [Not__347:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/concat [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/concat [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/concat:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__356 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/concat:0 [04/08/2022-14:45:40] [V] [TRT] Not__356 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__356 for ONNX node: Not__356 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__356:0 for ONNX tensor: Not__356:0 [04/08/2022-14:45:40] [V] [TRT] Not__356 [Not] outputs: [Not__356:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Cast__359 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__356:0 [04/08/2022-14:45:40] [V] [TRT] Cast__359 [Cast] inputs: [Not__356:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Cast__359 for ONNX node: Cast__359 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Cast__359:0 for ONNX tensor: Cast__359:0 [04/08/2022-14:45:40] [V] [TRT] Cast__359 [Cast] outputs: [Cast__359:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/All_ReduceSum__365 [ReduceSum] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Cast__359:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/All_ReduceSum__365 [ReduceSum] inputs: [Cast__359:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/All_ReduceSum__365 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/All_ReduceSum__365 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/All_ReduceSum__365:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/All_ReduceSum__365:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/All_ReduceSum__365 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/All_ReduceSum__365:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Greater__369 [Greater] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/All_ReduceSum__365:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] Greater__369 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_5/All_ReduceSum__365:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Greater__369 for ONNX node: Greater__369 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Greater__369:0 for ONNX tensor: Greater__369:0 [04/08/2022-14:45:40] [V] [TRT] Greater__369 [Greater] outputs: [Greater__369:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__372 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Greater__369:0 [04/08/2022-14:45:40] [V] [TRT] Not__372 [Not] inputs: [Greater__369:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__372 for ONNX node: Not__372 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__372:0 for ONNX tensor: Not__372:0 [04/08/2022-14:45:40] [V] [TRT] Not__372 [Not] 
outputs: [Not__372:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Cast [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__372:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Cast [Cast] inputs: [Not__372:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Cast [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Cast:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Cast:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul_1 [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax [Softmax] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax [Softmax] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] 
Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1/shape_Concat__2188 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1/shape_Concat__2188 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1/shape_Concat__2188 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1/shape_Concat__2188 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1/shape_Concat__2188:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1/shape_Concat__2188:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1/shape_Concat__2188 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1/shape_Concat__2188:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1__2189 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1/shape_Concat__2188:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1__2189 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1/shape_Concat__2188:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1__2189 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1__2189 
[04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1__2189:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1__2189:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1__2189 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1__2189:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1__2189:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1__2189:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Shape__2174:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2/shape_Concat__2208 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2/shape_Concat__2208 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2/shape_Concat__2208 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2/shape_Concat__2208 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2/shape_Concat__2208:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2/shape_Concat__2208:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2/shape_Concat__2208 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2/shape_Concat__2208:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2__2219 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2/shape_Concat__2208:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2__2219 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2/shape_Concat__2208:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2__2219 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2__2219 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2__2219:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2__2219:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2__2219 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2__2219:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2__2219:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2__2219:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1 
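[Editor's note] The entries above show how tf2onnx has lowered the attention-weighting einsum of block 6 (attention probabilities times the per-head values) into plain ONNX ops: a Reshape of the Softmax output, a batched MatMul of (-1, 12, -1, -1) against (-1, 12, -1, 64), a Reshape, and a Transpose towards (-1, -1, 12, 64). A minimal NumPy sketch of the equivalent arithmetic follows; the batch and sequence sizes are illustrative, and only the 12 heads and the 64-wide head dimension come from the log.

    import numpy as np

    batch, seq, heads, head_dim = 2, 8, 12, 64   # 12 and 64 are the static dims printed in the log; batch/seq are dynamic (-1)
    probs = np.random.rand(batch, heads, seq, seq).astype(np.float32)        # Softmax:0 -> (-1, 12, -1, -1)
    values = np.random.rand(batch, heads, seq, head_dim).astype(np.float32)  # einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)

    # einsum_1/MatMul: batched matmul over the last two axes -> (-1, 12, -1, 64)
    context = probs @ values

    # einsum_1/transpose_1: move the head axis next to the head dimension -> (-1, -1, 12, 64)
    context = context.transpose(0, 2, 1, 3)

    # the same contraction written as the einsum the original TF graph expressed
    reference = np.einsum('bhqk,bhkd->bqhd', probs, values)
    assert np.allclose(context, reference, atol=1e-5)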
[04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3__2220 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3__2220 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3__2220 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3__2220 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3__2220:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3__2220:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3__2220 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3__2220:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3__2220:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Shape_3__2220:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3/shape_Concat__2232 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3/shape_Concat__2232 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3/shape_Concat__2232 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3/shape_Concat__2232 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3/shape_Concat__2232:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3/shape_Concat__2232:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3/shape_Concat__2232 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3/shape_Concat__2232:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3__2233 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3/shape_Concat__2232:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3__2233 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3/shape_Concat__2232:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3__2233 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3__2233 
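[Editor's note] Because both batch and sequence length are dynamic (-1), the merge-heads reshape cannot use a baked-in shape; the entries above rebuild it at runtime with Shape_3 -> Cast -> Slice -> Concat -> Cast and feed the resulting 3-element INT32 vector to Reshape_3, which the following entries show producing (-1, -1, 768). A rough sketch of that shape plumbing, assuming the sliced element is the dynamic sequence dimension and the two constants flanking it are -1 and 768 (their values are not printed in the log):

    import numpy as np

    x = np.zeros((2, 8, 12, 64), dtype=np.float32)        # einsum_1/transpose_1:0 -> (-1, -1, 12, 64)

    dims = np.array(x.shape, dtype=np.int32)               # Shape_3 -> (4)[INT32]
    seq = dims[1:2]                                        # strided_slice_3: keep a single dynamic dimension
    # Reshape_3/shape_Concat__2232: [const, sliced dim, const] -> (3)[INT32]; -1 and 768 are assumed values
    target = np.concatenate([np.array([-1], np.int32), seq, np.array([768], np.int32)])

    merged = x.reshape(target)                             # Reshape_3: (-1, -1, 12, 64) -> (-1, -1, 768)
    assert merged.shape == (2, 8, 768)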
[04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3__2233:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3__2233:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3__2233 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3__2233:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3__2233:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3__2233:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape__2234 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape:0 
[04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape__2234 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape__2234 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape__2234 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape__2234:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape__2234:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape__2234 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape__2234:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape__2234:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Shape__2234:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot__2241 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot__2241 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot__2241 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot__2241 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot__2241:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot__2241:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot__2241 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot__2241:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot__2241:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot__2241:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/BiasAdd [Add] 
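[Editor's note] This is the recurring lowering of a Keras Dense layer applied to a 3-D tensor (tf.tensordot): flatten (-1, -1, 768) to (-1, 768), run one 2-D MatMul against the (768, 768) kernel, then restore the leading batch/sequence dimensions using the shape assembled by the GatherV2/Concat entries above; the BiasAdd parsed in the entries that follow broadcasts a (768,) bias over the result. A compact NumPy sketch with illustrative batch/sequence sizes and random weights standing in for the real kernel and bias:

    import numpy as np

    batch, seq, width = 2, 8, 768
    x = np.random.rand(batch, seq, width).astype(np.float32)   # Reshape_3:0 -> (-1, -1, 768)
    kernel = np.random.rand(width, width).astype(np.float32)   # dense_3 kernel -> (768, 768)
    bias = np.random.rand(width).astype(np.float32)            # dense_3 bias -> (768)

    flat = x.reshape(-1, width)               # Tensordot/Reshape: (-1, -1, 768) -> (-1, 768)
    flat = flat @ kernel                      # Tensordot/MatMul: (-1, 768) x (768, 768) -> (-1, 768)
    out = flat.reshape(batch, seq, width)     # Tensordot (Reshape): back to (-1, -1, 768)
    out = out + bias                          # dense_3/BiasAdd: broadcast (768,) over the last axis

    assert out.shape == (batch, seq, width)

The same flatten/MatMul/restore pattern repeats further down for the FeedForward dense_4 layer (768 -> 3072, followed by Relu) and the dense_5 layer (3072 -> 768).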
[04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_5/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean 
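[Editor's note] The residual add above feeds the layer normalization of block 6, and the entries that follow spell LayerNorm out as primitive ops: Mean (mapped to GlobalAveragePool over the 768 channels), Sub, Square (a Mul of the tensor with itself), a second Mean for the variance, an epsilon Add, a clip_by_value implemented as Min then Max, Sqrt, Div, then a Mul by the learned scale and an Add of the learned offset. A NumPy sketch of that exact sequence over the last axis, with an assumed epsilon of 1e-12 and illustrative clip bounds (the scalar constants' values are not printed in the log):

    import numpy as np

    x = np.random.rand(2, 8, 768).astype(np.float32)         # MultiHeadSelfAttention-Add_6/add:0 -> (-1, -1, 768)
    gamma = np.ones(768, dtype=np.float32)                    # Norm mul/ReadVariableOp -> (768)
    beta = np.zeros(768, dtype=np.float32)                    # Norm add_1/ReadVariableOp -> (768)
    eps, clip_max = 1e-12, 1e9                                # assumed values

    mean = x.mean(axis=-1, keepdims=True)                     # Norm_6/Mean -> (-1, -1, 1)
    centered = x - mean                                       # Norm_6/sub
    var = (centered * centered).mean(axis=-1, keepdims=True)  # Norm_6/Square + Norm_6/Mean_1
    var = np.maximum(np.minimum(var + eps, clip_max), 0.0)    # Norm_6/add + clip_by_value (Min, then Max)
    out = centered / np.sqrt(var)                             # Norm_6/Sqrt + Norm_6/truediv
    out = out * gamma + beta                                  # Norm_6/mul + Norm_6/add_1

    assert out.shape == x.shape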
[04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Square [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Square [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value/Minimum [Min] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value/Minimum [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value [Max] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Sqrt [Sqrt] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Sqrt [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/truediv [Div] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1 [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape__2242 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape__2242 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape__2242 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape__2242 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape__2242:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape__2242:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape__2242 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape__2242:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape__2242:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/GatherV2 [Gather] inputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Shape__2242:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot__2249 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot__2249 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot__2249 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot__2249 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot__2249:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot__2249:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot__2249 [Cast] outputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot__2249:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot__2249:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot__2249:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu [Relu] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape__2250 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape__2250 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape__2250 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape__2250 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape__2250:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape__2250:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape__2250 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape__2250:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape__2250:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Shape__2250:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/GatherV2:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot__2257 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot__2257 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot__2257 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot__2257 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot__2257:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot__2257:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot__2257 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot__2257:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot__2257:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot__2257:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_6/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_6/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean [GlobalAveragePool] inputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Square [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Square [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value/Minimum [Min] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value/Minimum [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value/Minimum [Min] outputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value [Max] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Sqrt [Sqrt] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Sqrt [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/truediv [Div] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1 [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 
768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2:0 -> (3)[INT32]], 
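
[editor's note] For readers tracing the block above: the ten nodes the parser just registered for Transformer-FeedForward-Norm_6 (Mean -> sub -> Square -> Mean_1 -> add -> clip_by_value Minimum/Maximum -> Sqrt -> truediv -> mul -> add_1, all on (-1, -1, 768) / (-1, -1, 1) tensors) amount to a layer normalization over the hidden axis with the variance clamped before the square root. The sketch below is only an illustration of that arithmetic, not the exported graph itself; the epsilon and clip bounds are placeholders, because the log shows the constant tensors only by name (Transformer-FeedForward-Norm/add/y:0, ...-Norm_11/Const_1:0, ...-Norm/Const:0), not by value.

```python
import numpy as np

def feedforward_norm(x, gamma, beta, eps=1e-12, clip_min=0.0, clip_max=1e6):
    """Numeric sketch of the Norm_6 subgraph seen in the log.
    eps, clip_min and clip_max are assumed placeholders, not values read from the log."""
    mean = x.mean(axis=-1, keepdims=True)                       # Norm_6/Mean      -> (-1, -1, 1)
    centered = x - mean                                         # Norm_6/sub       -> (-1, -1, 768)
    var = (centered * centered).mean(axis=-1, keepdims=True)    # Square + Mean_1  -> (-1, -1, 1)
    var = np.clip(var + eps, clip_min, clip_max)                # add + clip_by_value (Minimum, then Maximum)
    normed = centered / np.sqrt(var)                            # Sqrt + truediv
    return normed * gamma + beta                                # mul (scale) + add_1 (shift)

# Toy usage with the shapes visible in the trace: (batch, seq_len, 768)
x = np.random.randn(2, 5, 768).astype(np.float32)
gamma = np.ones(768, dtype=np.float32)
beta = np.zeros(768, dtype=np.float32)
print(feedforward_norm(x, gamma, beta).shape)   # (2, 5, 768)
```

Similarly, the repeated Tensordot/Reshape -> Tensordot/MatMul -> Tensordot (Reshape back) triplets around each dense layer in this trace are the graph-level lowering of tf.tensordot (flatten to (-1, 768) or (-1, 3072), 2-D MatMul against the (768, 3072) / (3072, 768) / (768, 768) weight, reshape back to (-1, -1, N)), which is why the parser also registers a Shape/Cast/Gather/Concat chain per layer just to rebuild the dynamic output shape.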
[04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape/shape_Concat__2355 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape/shape_Concat__2355 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape/shape_Concat__2355 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape/shape_Concat__2355 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape/shape_Concat__2355:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape/shape_Concat__2355:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape/shape_Concat__2355 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape/shape_Concat__2355:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape/shape_Concat__2355:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape/shape_Concat__2355:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_2__2282:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1__2298 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1__2298 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1__2298 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1__2298 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1__2298:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1__2298:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1__2298 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1__2298:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1__2298:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape_1__2298:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1 [Reshape] 
outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Transpose__3554 [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] Transpose__3554 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Transpose__3554 for ONNX node: Transpose__3554 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Transpose__3554:0 for ONNX tensor: Transpose__3554:0 [04/08/2022-14:45:40] [V] [TRT] Transpose__3554 [Transpose] outputs: [Transpose__3554:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Shape__3616 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] Shape__3616 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Shape__3616 for ONNX node: Shape__3616 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Shape__3616:0 for ONNX tensor: Shape__3616:0 [04/08/2022-14:45:40] [V] [TRT] Shape__3616 [Shape] outputs: [Shape__3616:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Gather__3620 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Shape__3616:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:40] [V] [TRT] Gather__3620 [Gather] inputs: [Shape__3616:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Gather__3620 for ONNX node: Gather__3620 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] Gather__3620 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320 
[Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape_1__2320:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1/shape_Concat__2339 [Concat] [04/08/2022-14:45:40] [V] [TRT] 
Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1/shape_Concat__2339 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1/shape_Concat__2339 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1/shape_Concat__2339 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1/shape_Concat__2339:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1/shape_Concat__2339:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1/shape_Concat__2339 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1/shape_Concat__2339:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1__2340 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1/shape_Concat__2339:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1__2340 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1/shape_Concat__2339:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1__2340 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1__2340 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1__2340:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1__2340:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1__2340 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1__2340:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Transpose__3554:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1__2340:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1 [Reshape] inputs: [Transpose__3554:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1__2340:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot__2273:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/BiasAdd 
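
The dense_1/Tensordot and dense/Tensordot entries above show how tf2onnx has lowered a Keras Dense layer applied to a rank-3 activation: the matrix multiply runs on a flattened (-1, 768) view, a Reshape restores the (batch, seq, 768) layout, and the bias is added afterwards as a broadcast Add against a (768,) vector. A minimal NumPy sketch of that equivalence (the weights W and b below are hypothetical; only the hidden size 768 is taken from the logged shapes):

import numpy as np

batch, seq, hidden = 2, 5, 768                             # 768 matches the logged tensor shapes
x = np.random.randn(batch, seq, hidden).astype(np.float32)
W = np.random.randn(hidden, hidden).astype(np.float32)     # hypothetical Dense kernel
b = np.random.randn(hidden).astype(np.float32)             # hypothetical Dense bias

# What the Keras layer computes: a dense projection over the last axis plus bias.
direct = x @ W + b

# The lowering seen in the log: MatMul on a (-1, 768) view, Reshape back, then BiasAdd.
flat = x.reshape(-1, hidden)                               # Tensordot/MatMul input: (-1, 768)
lowered = (flat @ W).reshape(batch, seq, hidden) + b       # Tensordot [Reshape] + BiasAdd [Add]

assert np.allclose(direct, lowered, rtol=1e-4, atol=1e-4)
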
[04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1__2319:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_1:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Shape__2357:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2/shape_Concat__2376 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2/shape_Concat__2376 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2/shape_Concat__2376 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2/shape_Concat__2376 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2/shape_Concat__2376:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2/shape_Concat__2376:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2/shape_Concat__2376 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2/shape_Concat__2376:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2__2387 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2/shape_Concat__2376:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2__2387 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2/shape_Concat__2376:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2__2387 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2__2387 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2__2387:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2__2387:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2__2387 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2__2387:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul [MatMul] 
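
The Transpose, Reshape and MatMul nodes parsed just above are the decomposition of the attention-score einsum: one operand is laid out as (batch, 12, seq_q, 64) and the other as (batch, 12, 64, seq_k), so the batched MatMul that follows produces the (-1, 12, -1, -1) score tensor seen in the log. A small NumPy sketch of that contraction, with the 12 heads and 64 dims per head taken from the logged shapes and everything else hypothetical:

import numpy as np

batch, heads, head_dim, seq = 2, 12, 64, 7                             # 12 and 64 come from the logged shapes
q = np.random.randn(batch, heads, seq, head_dim).astype(np.float32)    # einsum/Reshape:0   -> (-1, 12, -1, 64)
k = np.random.randn(batch, heads, head_dim, seq).astype(np.float32)    # einsum/Reshape_1:0 -> (-1, 12, 64, -1)

# einsum/MatMul: batched Q @ K^T, giving raw attention scores of shape (-1, 12, -1, -1)
scores = q @ k

# The same contraction written directly as an einsum
ref = np.einsum('bhqd,bhdk->bhqk', q, k)
assert np.allclose(scores, ref, rtol=1e-4, atol=1e-4)

The truediv [Mul] node parsed a little further down then scales these scores by a constant-folded reciprocal (presumably the usual 1/sqrt(64) attention factor) before the mask and Softmax are applied.
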
[04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2__2387:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2__2387:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/truediv [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2:0 -> (-1, 12, -1, 
-1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__372:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1 [Unsqueeze] inputs: [Not__372:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/concat [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/concat [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/concat:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/concat:0 -> 
(2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__387 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/concat:0 [04/08/2022-14:45:40] [V] [TRT] Not__387 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__387 for ONNX node: Not__387 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__387:0 for ONNX tensor: Not__387:0 [04/08/2022-14:45:40] [V] [TRT] Not__387 [Not] outputs: [Not__387:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Cast__390 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__387:0 [04/08/2022-14:45:40] [V] [TRT] Cast__390 [Cast] inputs: [Not__387:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Cast__390 for ONNX node: Cast__390 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Cast__390:0 for ONNX tensor: Cast__390:0 [04/08/2022-14:45:40] [V] [TRT] Cast__390 [Cast] outputs: [Cast__390:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/All_ReduceSum__396 [ReduceSum] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Cast__390:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/All_ReduceSum__396 [ReduceSum] inputs: [Cast__390:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/All_ReduceSum__396 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/All_ReduceSum__396 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/All_ReduceSum__396:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/All_ReduceSum__396:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/All_ReduceSum__396 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/All_ReduceSum__396:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Greater__400 [Greater] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/All_ReduceSum__396:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] Greater__400 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_6/All_ReduceSum__396:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Greater__400 for ONNX node: Greater__400 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Greater__400:0 for ONNX tensor: Greater__400:0 [04/08/2022-14:45:40] [V] [TRT] Greater__400 [Greater] outputs: [Greater__400:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__403 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Greater__400:0 [04/08/2022-14:45:40] [V] [TRT] Not__403 [Not] 
inputs: [Greater__400:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__403 for ONNX node: Not__403 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__403:0 for ONNX tensor: Not__403:0 [04/08/2022-14:45:40] [V] [TRT] Not__403 [Not] outputs: [Not__403:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__403:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims [Unsqueeze] inputs: [Not__403:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/concat [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/concat [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/concat:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__412 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/concat:0 [04/08/2022-14:45:40] [V] [TRT] Not__412 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__412 for ONNX node: Not__412 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__412:0 for ONNX tensor: Not__412:0 [04/08/2022-14:45:40] [V] [TRT] Not__412 [Not] outputs: [Not__412:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Cast__415 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__412:0 
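
The Not -> Cast -> ReduceSum -> Greater -> Not chains around the two concat nodes above are consistent with how tf2onnx emulates a boolean all() reduction, since ONNX has no boolean reduce operator: negate the mask, count the True values as float32, compare the count against a scalar constant (assumed here to be 0.0), and negate again. A minimal NumPy sketch of that identity over the (2, -1, -1) stacked mask seen in the log, assuming the reduction runs over the leading axis; the mask contents are hypothetical:

import numpy as np

mask = np.random.rand(4, 6) > 0.3                          # hypothetical 2-D boolean mask
stacked = np.stack([mask, mask], axis=0)                   # Concat of the two ExpandDims copies -> (2, ...)

# Direct reduction the original graph expressed:
direct = np.all(stacked, axis=0)

# Lowering seen in the log: Not -> Cast(float32) -> ReduceSum -> Greater(0.0) -> Not
lowered = ~((~stacked).astype(np.float32).sum(axis=0) > 0.0)

assert np.array_equal(direct, lowered)
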
[04/08/2022-14:45:40] [V] [TRT] Cast__415 [Cast] inputs: [Not__412:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Cast__415 for ONNX node: Cast__415 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Cast__415:0 for ONNX tensor: Cast__415:0 [04/08/2022-14:45:40] [V] [TRT] Cast__415 [Cast] outputs: [Cast__415:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/All_ReduceSum__421 [ReduceSum] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Cast__415:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/All_ReduceSum__421 [ReduceSum] inputs: [Cast__415:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/All_ReduceSum__421 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/All_ReduceSum__421 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/All_ReduceSum__421:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/All_ReduceSum__421:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/All_ReduceSum__421 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/All_ReduceSum__421:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Greater__425 [Greater] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/All_ReduceSum__421:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] Greater__425 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_6/All_ReduceSum__421:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Greater__425 for ONNX node: Greater__425 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Greater__425:0 for ONNX tensor: Greater__425:0 [04/08/2022-14:45:40] [V] [TRT] Greater__425 [Greater] outputs: [Greater__425:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__428 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Greater__425:0 [04/08/2022-14:45:40] [V] [TRT] Not__428 [Not] inputs: [Greater__425:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__428 for ONNX node: Not__428 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__428:0 for ONNX tensor: Not__428:0 [04/08/2022-14:45:40] [V] [TRT] Not__428 [Not] outputs: [Not__428:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Cast [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__428:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Cast [Cast] inputs: [Not__428:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Cast for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Cast [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Cast:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Cast:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul_1 [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul [04/08/2022-14:45:40] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax [Softmax] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_2 [Slice] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1/shape_Concat__2402 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1/shape_Concat__2402 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1/shape_Concat__2402 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1/shape_Concat__2402 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1/shape_Concat__2402:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1/shape_Concat__2402:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1/shape_Concat__2402 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1/shape_Concat__2402:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1__2403 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1/shape_Concat__2402:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1__2403 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1/shape_Concat__2402:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1__2403 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1__2403 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1__2403:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1__2403:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1__2403 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1__2403:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1__2403:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1__2403:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching 
for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Shape__2388:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2/shape_Concat__2422 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2/shape_Concat__2422 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2/shape_Concat__2422 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2/shape_Concat__2422 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2/shape_Concat__2422:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2/shape_Concat__2422:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2/shape_Concat__2422 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2/shape_Concat__2422:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2__2433 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2/shape_Concat__2422:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2__2433 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2/shape_Concat__2422:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2__2433 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2__2433 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2__2433:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2__2433:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2__2433 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2__2433:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2__2433:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2__2433:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3__2434 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3__2434 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3__2434 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3__2434 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3__2434:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3__2434:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3__2434 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3__2434:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3__2434:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Shape_3__2434:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3/shape_Concat__2446 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3/shape_Concat__2446 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3/shape_Concat__2446 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3/shape_Concat__2446 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3/shape_Concat__2446:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3/shape_Concat__2446:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3/shape_Concat__2446 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3/shape_Concat__2446:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3__2447 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3/shape_Concat__2446:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3__2447 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3/shape_Concat__2446:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3__2447 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3__2447 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3__2447:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3__2447:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3__2447 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3__2447:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3__2447:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3__2447:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape__2448 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape__2448 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape__2448 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape__2448 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape__2448:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape__2448:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape__2448 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape__2448:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape__2448:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Shape__2448:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot__2455 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot__2455 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot__2455 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot__2455 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot__2455:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot__2455:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot__2455 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot__2455:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for 
input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot__2455:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot__2455:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_6/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Square [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Square [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean_1 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value/Minimum [Min] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value/Minimum [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value [Max] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Sqrt [Sqrt] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Sqrt [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/truediv [Div] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/truediv for 
ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1 [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape__2456 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape__2456 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape__2456 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape__2456 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape__2456:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape__2456:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape__2456 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape__2456:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape__2456:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Shape__2456:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/GatherV2:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot__2463 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot__2463 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot__2463 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot__2463 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot__2463:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot__2463:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot__2463 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot__2463:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot__2463:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot__2463:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu [Relu] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape for 
ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape__2464 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape__2464 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape__2464 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape__2464 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape__2464:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape__2464:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape__2464 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape__2464:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape__2464:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Shape__2464:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/GatherV2:0 
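The Shape -> Cast -> GatherV2 -> Concat -> Cast bookkeeping followed by Reshape -> MatMul -> Reshape above is how tf2onnx lowers a Keras Dense applied to a rank-3 activation (a Tensordot): the (-1, -1, 768) input is flattened to (-1, 768), multiplied by the (768, 3072) kernel, reshaped back using the dynamically gathered batch/sequence dims, then bias-added and passed through Relu; the dense_5 nodes that follow repeat the same pattern with a (3072, 768) kernel. A minimal NumPy sketch of that decomposition, with illustrative function name and test shapes that are not part of the model:

import numpy as np

def tensordot_dense(x, kernel, bias):
    # Mirror the Tensordot lowering in the log: Shape/GatherV2/Concat rebuild the
    # output shape, Reshape flattens to 2-D, MatMul applies the kernel, a second
    # Reshape restores rank 3, then BiasAdd + Relu. Illustrative sketch only.
    batch, seq = x.shape[0], x.shape[1]              # Shape + GatherV2 (first two dims)
    out_shape = (batch, seq, kernel.shape[1])        # Concat with the constant last dim
    flat = x.reshape(-1, kernel.shape[0])            # Tensordot/Reshape: (-1, 768)
    y = (flat @ kernel).reshape(out_shape)           # Tensordot/MatMul + Reshape back
    return np.maximum(y + bias, 0.0)                 # BiasAdd + Relu

x = np.random.randn(2, 5, 768).astype(np.float32)   # hypothetical batch=2, seq=5
w = np.random.randn(768, 3072).astype(np.float32)   # kernel shape taken from the log
b = np.zeros(3072, dtype=np.float32)
print(tensordot_dense(x, w, b).shape)                # (2, 5, 3072)
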
[04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot__2471 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot__2471 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot__2471 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot__2471 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot__2471:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot__2471:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot__2471 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot__2471:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot__2471:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot__2471:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_7/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_7/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean [GlobalAveragePool] outputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Square [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Square [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean_1:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value/Minimum [Min] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value/Minimum [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value [Max] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value [Max] inputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Sqrt [Sqrt] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Sqrt [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/truediv [Div] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/truediv:0 
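The Mean -> sub -> Square -> Mean_1 -> add -> clip_by_value (Minimum, then Maximum) -> Sqrt -> truediv chain above is the exported LayerNorm: both reduce-means over the hidden axis are parsed as GlobalAveragePool, the epsilon and clip bounds come from Const tensors whose values the verbose log does not print, and the mul/add_1 nodes parsed next apply the learned scale and shift. A NumPy sketch under those assumptions (eps and the clip bounds below are placeholders, not values read from the model):

import numpy as np

def exported_layernorm(x, gamma, beta, eps=1e-12, clip_lo=0.0, clip_hi=np.inf):
    # Mirror the Norm_7 subgraph from the log; eps and the clip bounds are
    # placeholders for constants not shown in the verbose output.
    mean = x.mean(axis=-1, keepdims=True)                      # Norm_7/Mean (GlobalAveragePool)
    centered = x - mean                                        # Norm_7/sub
    var = (centered * centered).mean(axis=-1, keepdims=True)   # Square + Mean_1
    v = np.maximum(np.minimum(var + eps, clip_hi), clip_lo)    # add + clip_by_value (Min, Max)
    return (centered / np.sqrt(v)) * gamma + beta              # Sqrt + truediv + mul + add_1

x = np.random.randn(2, 5, 768).astype(np.float32)              # hypothetical batch=2, seq=5
print(exported_layernorm(x, np.ones(768, np.float32), np.zeros(768, np.float32)).shape)
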
[04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1 [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472 [Cast] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2/shape_Concat__2510 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2/shape_Concat__2510 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2/shape_Concat__2510 
for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2/shape_Concat__2510 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2/shape_Concat__2510:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2/shape_Concat__2510:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2/shape_Concat__2510 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2/shape_Concat__2510:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2/shape_Concat__2510:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2/shape_Concat__2510:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/Shape__2472:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot [Reshape] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1__2512 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1__2512 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 
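Reshape_2 and einsum_1/transpose above perform the head split for the self-attention block: the 768-wide projection is reshaped to (batch, seq, 12, 64) and transposed to (batch, 12, seq, 64), while the surrounding Shape/strided_slice/Concat ops only recover the dynamic sequence length needed to build that reshape target. A minimal NumPy sketch (function name and test shapes are illustrative):

import numpy as np

def split_heads(x, num_heads=12, head_dim=64):
    # Reshape_2 + einsum_1/transpose from the log:
    # (batch, seq, 768) -> (batch, seq, 12, 64) -> (batch, 12, seq, 64), perm (0, 2, 1, 3).
    batch, seq, _ = x.shape                          # strided_slice recovers seq at runtime
    return x.reshape(batch, seq, num_heads, head_dim).transpose(0, 2, 1, 3)

q = np.random.randn(2, 5, 768).astype(np.float32)   # hypothetical projected activations
print(split_heads(q).shape)                          # (2, 12, 5, 64)
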
[04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1__2512 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1__2512 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1__2512:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1__2512:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1__2512 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1__2512:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1__2512:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape_1__2512:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Transpose__3560 [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for 
input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] Transpose__3560 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Transpose__3560 for ONNX node: Transpose__3560 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Transpose__3560:0 for ONNX tensor: Transpose__3560:0 [04/08/2022-14:45:40] [V] [TRT] Transpose__3560 [Transpose] outputs: [Transpose__3560:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Shape__3621 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] Shape__3621 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Shape__3621 for ONNX node: Shape__3621 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Shape__3621:0 for ONNX tensor: Shape__3621:0 [04/08/2022-14:45:40] [V] [TRT] Shape__3621 [Shape] outputs: [Shape__3621:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Gather__3625 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Shape__3621:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:40] [V] [TRT] Gather__3625 [Gather] inputs: [Shape__3621:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Gather__3625 for ONNX node: Gather__3625 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] Gather__3625 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape_1__2534:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1/shape_Concat__2553 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1/shape_Concat__2553 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1/shape_Concat__2553 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1/shape_Concat__2553 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1/shape_Concat__2553:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1/shape_Concat__2553:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1/shape_Concat__2553 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1/shape_Concat__2553:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1__2554 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1/shape_Concat__2553:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1__2554 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1/shape_Concat__2553:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1__2554 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1__2554 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1__2554:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1__2554:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1__2554 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1__2554:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Transpose__3560:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1__2554:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1 [Reshape] inputs: [Transpose__3560:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1__2554:0 -> 
(4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot__2479:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/BiasAdd:0 
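The Shape, Cast, strided_slice_*, and shape_Concat nodes threaded through the einsum subgraph above are shape bookkeeping: because batch and sequence length are dynamic, the Reshape targets cannot be baked in as constants, so single dimensions are sliced out of a runtime Shape result and concatenated with folded constants (the 12 heads and 64-dim head size) to build the 4-element shape tensor that feeds Reshape. A rough NumPy sketch of that pattern, with made-up dimension values and simplified slice indices:

```python
import numpy as np

heads, head_dim = 12, 64
x = np.random.randn(3, 5, heads, head_dim)        # stands in for Reshape_1:0 -> (-1, -1, 12, 64)

# Shape -> (4)[INT32], cf. the einsum/Shape_1 node (followed by a no-op Cast to int32).
shp = np.array(x.shape, dtype=np.int32)

# strided_slice_* nodes pull single runtime dimensions out of that shape vector.
batch_dim = shp[0:1]                              # dynamic batch size
seq_dim = shp[1:2]                                # dynamic sequence length

# shape_Concat assembles the reshape target from the slices plus folded constants.
target = np.concatenate([batch_dim,
                         np.array([heads], dtype=np.int32),
                         np.array([head_dim], dtype=np.int32),
                         seq_dim])

# The Reshape node then consumes this runtime-built shape; here it just makes the
# dimensions of the transposed key tensor explicit before the batched MatMul.
key_t = x.transpose(0, 2, 3, 1)                   # cf. Transpose__3560:0 -> (-1, 12, 64, -1)
assert key_t.reshape(tuple(target)).shape == (3, heads, head_dim, 5)
```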
[04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2__2511:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_1:0 
-> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Shape__2571:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2/shape_Concat__2590 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2/shape_Concat__2590 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2/shape_Concat__2590 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2/shape_Concat__2590 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2/shape_Concat__2590:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2/shape_Concat__2590:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2/shape_Concat__2590 [Concat] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2/shape_Concat__2590:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2__2601 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2/shape_Concat__2590:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2__2601 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2/shape_Concat__2590:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2__2601 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2__2601 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2__2601:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2__2601:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2__2601 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2__2601:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2__2601:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2__2601:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/truediv [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/truediv for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__428:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1 [Unsqueeze] inputs: [Not__428:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/concat [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/concat [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/concat:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__443 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/concat:0 [04/08/2022-14:45:40] [V] [TRT] Not__443 [Not] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__443 for ONNX node: Not__443 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__443:0 for ONNX tensor: Not__443:0 [04/08/2022-14:45:40] [V] [TRT] Not__443 [Not] outputs: [Not__443:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Cast__446 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__443:0 [04/08/2022-14:45:40] [V] [TRT] Cast__446 [Cast] inputs: [Not__443:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Cast__446 for ONNX node: Cast__446 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Cast__446:0 for ONNX tensor: Cast__446:0 [04/08/2022-14:45:40] [V] [TRT] Cast__446 [Cast] outputs: [Cast__446:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/All_ReduceSum__452 [ReduceSum] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Cast__446:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/All_ReduceSum__452 [ReduceSum] inputs: [Cast__446:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/All_ReduceSum__452 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/All_ReduceSum__452 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/All_ReduceSum__452:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/All_ReduceSum__452:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/All_ReduceSum__452 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/All_ReduceSum__452:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Greater__456 [Greater] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/All_ReduceSum__452:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] Greater__456 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_7/All_ReduceSum__452:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Greater__456 for ONNX node: Greater__456 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Greater__456:0 for ONNX tensor: Greater__456:0 [04/08/2022-14:45:40] [V] [TRT] Greater__456 [Greater] outputs: [Greater__456:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__459 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Greater__456:0 [04/08/2022-14:45:40] [V] [TRT] Not__459 [Not] inputs: [Greater__456:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__459 for ONNX node: Not__459 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__459:0 for ONNX tensor: Not__459:0 [04/08/2022-14:45:40] [V] [TRT] Not__459 [Not] 
outputs: [Not__459:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__459:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1 [Unsqueeze] inputs: [Not__459:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/concat [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/concat [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/concat:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__468 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/concat:0 [04/08/2022-14:45:40] [V] [TRT] Not__468 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__468 for ONNX node: Not__468 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__468:0 for ONNX tensor: Not__468:0 [04/08/2022-14:45:40] [V] [TRT] Not__468 [Not] outputs: [Not__468:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Cast__471 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__468:0 [04/08/2022-14:45:40] [V] [TRT] Cast__471 [Cast] inputs: [Not__468:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Cast__471 for ONNX node: Cast__471 
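The Not / Cast / ReduceSum / Greater / Not chains above (Not__443, Cast__446, All_ReduceSum__452, Greater__456, Not__459, and the matching Transformer-FeedForward-Add_7 copy) are how tf2onnx lowers a boolean reduce_all on the attention mask, since this opset has no All operator: all(x) is rewritten as not(sum(cast(not x)) > 0). A small NumPy check of that identity; the reduction axis is chosen here to match the (2, -1, -1) -> (-1, -1) shapes in the log:

```python
import numpy as np

mask = np.random.rand(2, 4, 6) > 0.5          # stand-in for the stacked boolean masks

# Direct reduction that the original TF graph expressed.
direct = np.all(mask, axis=0)

# tf2onnx lowering seen in the log: Not -> Cast -> ReduceSum -> Greater -> Not.
negated = ~mask                                # Not__4xx
as_float = negated.astype(np.float32)          # Cast__4xx, "Casting to type: float32"
summed = as_float.sum(axis=0)                  # All_ReduceSum__4xx
any_false = summed > 0.0                       # Greater__4xx against a scalar Const
lowered = ~any_false                           # final Not__4xx

assert np.array_equal(direct, lowered)
```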
[04/08/2022-14:45:40] [V] [TRT] Registering tensor: Cast__471:0 for ONNX tensor: Cast__471:0 [04/08/2022-14:45:40] [V] [TRT] Cast__471 [Cast] outputs: [Cast__471:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/All_ReduceSum__477 [ReduceSum] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Cast__471:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/All_ReduceSum__477 [ReduceSum] inputs: [Cast__471:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/All_ReduceSum__477 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/All_ReduceSum__477 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/All_ReduceSum__477:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/All_ReduceSum__477:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/All_ReduceSum__477 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/All_ReduceSum__477:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Greater__481 [Greater] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/All_ReduceSum__477:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] Greater__481 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_7/All_ReduceSum__477:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Greater__481 for ONNX node: Greater__481 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Greater__481:0 for ONNX tensor: Greater__481:0 [04/08/2022-14:45:40] [V] [TRT] Greater__481 [Greater] outputs: [Greater__481:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__484 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Greater__481:0 [04/08/2022-14:45:40] [V] [TRT] Not__484 [Not] inputs: [Greater__481:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__484 for ONNX node: Not__484 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__484:0 for ONNX tensor: Not__484:0 [04/08/2022-14:45:40] [V] [TRT] Not__484 [Not] outputs: [Not__484:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Cast [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__484:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Cast [Cast] inputs: [Not__484:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Cast [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Cast:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Cast:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Cast:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul_1 [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax [Softmax] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1/shape_Concat__2616 [Concat] 
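
The run of entries above (Cast → ExpandDims ×2 → Sub → Mul → Add → Softmax) is the usual additive attention-mask pattern being reconstructed for attention layer 8: the padding mask is broadcast to (batch, 1, 1, seq_len), kept positions multiply the scaled scores, masked positions get a large negative bias, and the result goes through the softmax. A minimal NumPy sketch of that pattern follows; the constant -1e12, the assumption that sub/x:0 is 1.0, and the array names are illustrative guesses, not values read out of the graph (the actual scalars are the folded constants Cast_1/x:0 and sub/x:0).

```python
import numpy as np

def masked_softmax(scores, mask, neg_bias=-1e12):
    """Mask attention scores as in the Mul / Mul_1 / Add / Softmax subgraph above.

    scores: (batch, heads, q_len, k_len) scaled attention scores (truediv:0).
    mask:   (batch, k_len) of 0/1 padding flags (Cast:0).
    neg_bias is an illustrative large negative constant, not the graph's value.
    """
    mask = mask.astype(scores.dtype)[:, None, None, :]      # ExpandDims x2 -> (B, 1, 1, K)
    biased = scores * mask + neg_bias * (1.0 - mask)         # mul + mul_1 + add
    biased -= biased.max(axis=-1, keepdims=True)             # numerical stability
    e = np.exp(biased)
    return e / e.sum(axis=-1, keepdims=True)                 # Softmax over the key axis

# Example: batch of 2, 12 heads, 4 positions; the last position of row 0 is padding.
scores = np.random.randn(2, 12, 4, 4).astype(np.float32)
mask = np.array([[1, 1, 1, 0], [1, 1, 1, 1]], dtype=np.float32)
probs = masked_softmax(scores, mask)
assert np.allclose(probs.sum(-1), 1.0, atol=1e-5)
```
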
[04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1/shape_Concat__2616 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1/shape_Concat__2616 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1/shape_Concat__2616 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1/shape_Concat__2616:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1/shape_Concat__2616:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1/shape_Concat__2616 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1/shape_Concat__2616:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1__2617 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1/shape_Concat__2616:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1__2617 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1/shape_Concat__2616:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1__2617 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1__2617 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1__2617:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1__2617:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1__2617 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1__2617:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1__2617:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1__2617:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice [Slice] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Shape__2602:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2/shape_Concat__2636 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2/shape_Concat__2636 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2/shape_Concat__2636 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2/shape_Concat__2636 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2/shape_Concat__2636:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2/shape_Concat__2636:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2/shape_Concat__2636 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2/shape_Concat__2636:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2__2647 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2/shape_Concat__2636:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2__2647 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2/shape_Concat__2636:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] 
Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2__2647 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2__2647 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2__2647:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2__2647:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2__2647 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2__2647:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul:0 
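
Here the parser walks through how tf2onnx lowered the second einsum of the attention block (attention probabilities × value): Shape/Slice/Concat nodes assemble the dynamic reshape targets, the two operands are reshaped to rank-4 batched matrices, and the contraction itself becomes a single batched MatMul whose (batch, 12, q_len, 64) result is then transposed and reshaped back to (batch, q_len, 768). A small NumPy sketch of that equivalence is below; the einsum subscripts and fixed sizes are assumptions inferred from the logged shapes, not taken from the original TF code.

```python
import numpy as np

B, H, Q, K, D = 2, 12, 5, 5, 64                  # batch, heads, query len, key len, head dim

probs = np.random.rand(B, H, Q, K).astype(np.float32)   # Softmax:0        -> (-1, 12, -1, -1)
v = np.random.rand(B, H, K, D).astype(np.float32)       # einsum_1/transpose:0 -> (-1, 12, -1, 64)

# What the exporter emits: reshape both operands to rank-4 batched matrices
# (already the case here), one batched MatMul, then Transpose + Reshape to merge heads.
context = probs @ v                                       # einsum_1/MatMul -> (B, H, Q, D)
merged = context.transpose(0, 2, 1, 3).reshape(B, Q, H * D)   # transpose_1 + Reshape_3 -> (B, Q, 768)

# The same contraction written directly as an einsum, for comparison.
reference = np.einsum('bhqk,bhkd->bqhd', probs, v).reshape(B, Q, H * D)
assert np.allclose(merged, reference, atol=1e-5)
```
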
[04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2__2647:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2__2647:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3 [Shape] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3__2648 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3__2648 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3__2648 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3__2648 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3__2648:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3__2648:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3__2648 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3__2648:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3__2648:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Shape_3__2648:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice_3 [Slice] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3/shape_Concat__2660 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3/shape_Concat__2660 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3/shape_Concat__2660 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3/shape_Concat__2660 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3/shape_Concat__2660:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3/shape_Concat__2660:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3/shape_Concat__2660 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3/shape_Concat__2660:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3__2661 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3/shape_Concat__2660:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3__2661 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3/shape_Concat__2660:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3__2661 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3__2661 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3__2661:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3__2661:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3__2661 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3__2661:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3__2661:0 [04/08/2022-14:45:40] 
[V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3__2661:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape__2662 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape__2662 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape__2662 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape__2662 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape__2662:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape__2662:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape__2662 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape__2662:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape__2662:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Shape__2662:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot__2669 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot__2669 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot__2669 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot__2669 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot__2669:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot__2669:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot__2669 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot__2669:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot__2669:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot__2669:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_7/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Square [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Square [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean_1 
[GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value/Minimum [Min] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value/Minimum [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value [Max] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Sqrt [Sqrt] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Sqrt [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/truediv [Div] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/truediv:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1 [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape [Shape] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape__2670 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape__2670 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape__2670 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape__2670 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape__2670:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape__2670:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape__2670 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape__2670:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape__2670:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Shape__2670:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing 
node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot__2677 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot__2677 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot__2677 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot__2677 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot__2677:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot__2677:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot__2677 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot__2677:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot__2677:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot__2677:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot:0 -> (-1, 
-1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu [Relu] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape__2678 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape__2678 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape__2678 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape__2678 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape__2678:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape__2678:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape__2678 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape__2678:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape__2678:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Shape__2678:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/GatherV2:0 -> 
(2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot__2685 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot__2685 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot__2685 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot__2685 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot__2685:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot__2685:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot__2685 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot__2685:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape [Reshape] outputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot__2685:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot__2685:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/BiasAdd [Add] inputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_8/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub [Sub] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add:0 
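
Editorial note on the Norm_* node chains being parsed here (Mean -> Sub -> Square as Mul -> Mean_1 -> Add epsilon -> clip_by_value split into Min and Max -> Sqrt -> Div -> Mul gamma -> Add beta): this is the tf2onnx expansion of a Keras LayerNormalization over the 768-wide hidden axis. A minimal numpy sketch of what that op chain computes follows; the epsilon and the clip bounds live in constants (Transformer-FeedForward-Norm/add/y:0, ...Const_1:0, ...Const:0) whose values are not printed in this log, so the 1e-12 and the clip range below are assumptions, and the function name is illustrative, not anything TensorRT generates.

    import numpy as np

    def layer_norm_like_parsed_graph(x, gamma, beta, eps=1e-12,
                                     clip_min=0.0, clip_max=np.inf):
        # Mean / sub: centre the activations over the hidden axis (size 768).
        mean = x.mean(axis=-1, keepdims=True)                    # ...-Norm_*/Mean
        centred = x - mean                                       # ...-Norm_*/sub
        # Square (Mul with itself) / Mean_1 / add: biased variance plus epsilon.
        var = (centred * centred).mean(axis=-1, keepdims=True)   # Square, Mean_1
        var_eps = var + eps                                      # ...-Norm_*/add
        # clip_by_value is parsed as Min followed by Max (bounds assumed here).
        var_eps = np.maximum(np.minimum(var_eps, clip_max), clip_min)
        # Sqrt / truediv / mul / add_1: normalise, then scale and shift.
        normed = centred / np.sqrt(var_eps)                      # Sqrt, truediv
        return normed * gamma + beta                             # mul, add_1

    x = np.random.randn(2, 5, 768).astype(np.float32)
    gamma = np.ones(768, dtype=np.float32)
    beta = np.zeros(768, dtype=np.float32)
    print(layer_norm_like_parsed_graph(x, gamma, beta).shape)    # (2, 5, 768)

The (-1, -1, 1)[FLOAT] shapes reported for Mean, Mean_1, add, clip_by_value and Sqrt match the keepdims reductions above; the broadcast back to (-1, -1, 768) happens at the truediv, mul and add_1 nodes.
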
[04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Square [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Square [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Square:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value/Minimum [Min] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value/Minimum [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value [Max] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value/Minimum:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Sqrt [Sqrt] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Sqrt [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/truediv [Div] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Sqrt:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/mul [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/truediv:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/truediv:0 -> (-1, -1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/mul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/mul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1 [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/mul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape [Reshape] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769:0 [04/08/2022-14:45:40] 
[V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape/shape_Concat__2783 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape/shape_Concat__2783 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape/shape_Concat__2783 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape/shape_Concat__2783 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape/shape_Concat__2783:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape/shape_Concat__2783:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape/shape_Concat__2783 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape/shape_Concat__2783:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725 [Cast] [04/08/2022-14:45:40] [V] 
[TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape/shape_Concat__2783:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape/shape_Concat__2783:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape__2769:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/GatherV2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/concat_1 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/GatherV2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/concat_1 for 
ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/concat_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/concat_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], 
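
The entries above are tf2onnx's lowering of a Keras Dense applied through Tensordot: the projection runs as a plain MatMul on a flattened (-1, 768) tensor, the Shape -> Slice -> Concat -> Cast chain rebuilds the (batch, seq, 768) target shape at run time for the Reshape that follows, and the bias is added afterwards. A minimal NumPy sketch of that equivalence; the batch/sequence sizes below are made up, only the 768 hidden width comes from the log:

    import numpy as np

    batch, seq, hidden = 2, 16, 768            # batch and seq are assumptions
    x = np.random.randn(batch, seq, hidden).astype(np.float32)
    w = np.random.randn(hidden, hidden).astype(np.float32)
    b = np.random.randn(hidden).astype(np.float32)

    # Lowered form: flatten -> MatMul (the (-1, 768) tensor in the log)
    # -> Reshape with the Concat-built shape -> BiasAdd
    flat = x.reshape(-1, hidden)
    y = (flat @ w).reshape(batch, seq, hidden) + b

    # Direct form: the dense layer applied along the last axis
    ref = np.einsum('bsh,hd->bsd', x, w) + b
    assert np.allclose(y, ref, atol=1e-3)

Because batch and sequence length are dynamic (-1), the target shape cannot be constant-folded, which is why the parser registers the whole INT32 shape-plumbing chain as extra layers.
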
[04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] 
Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1__2726 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1__2726 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1__2726 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1__2726 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1__2726:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1__2726:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1__2726 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1__2726:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1__2726:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape_1__2726:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_3 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/BiasAdd [Add] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Transpose__3566 [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] Transpose__3566 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Transpose__3566 for ONNX node: Transpose__3566 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Transpose__3566:0 for ONNX tensor: Transpose__3566:0 [04/08/2022-14:45:40] [V] [TRT] Transpose__3566 [Transpose] outputs: [Transpose__3566:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Shape__3626 [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] Shape__3626 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: 
Shape__3626 for ONNX node: Shape__3626 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Shape__3626:0 for ONNX tensor: Shape__3626:0 [04/08/2022-14:45:40] [V] [TRT] Shape__3626 [Shape] outputs: [Shape__3626:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Gather__3630 [Gather] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Shape__3626:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:40] [V] [TRT] Gather__3630 [Gather] inputs: [Shape__3626:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Gather__3630 for ONNX node: Gather__3630 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] Gather__3630 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3 
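
Reshape_1 to (-1, -1, 12, 64) and Transpose__3566 to (-1, 12, 64, -1) above, together with the einsum/transpose and einsum/MatMul entries a little further down, are the exported form of the multi-head attention score einsum: the 768-wide projections are split into 12 heads of 64, the head axis is moved in front of the sequence axis, and the branch that ends up as (-1, 12, 64, -1) plays the transposed-key role, so a single batched MatMul yields the (-1, 12, -1, -1) logits. A NumPy sketch with assumed batch/sequence sizes (12 and 64 are from the log; which projection is query vs. key is an interpretation):

    import numpy as np

    batch, seq, heads, head_dim = 2, 16, 12, 64
    q = np.random.randn(batch, seq, heads * head_dim).astype(np.float32)
    k = np.random.randn(batch, seq, heads * head_dim).astype(np.float32)

    # Reshape / Reshape_1: (-1, -1, 768) -> (-1, -1, 12, 64)
    q4 = q.reshape(batch, seq, heads, head_dim)
    k4 = k.reshape(batch, seq, heads, head_dim)

    # einsum/transpose: (-1, 12, -1, 64); Transpose__3566: (-1, 12, 64, -1)
    q_t = q4.transpose(0, 2, 1, 3)
    k_t = k4.transpose(0, 2, 3, 1)

    # einsum/MatMul: one batched matmul produces the (-1, 12, -1, -1) logits
    scores = q_t @ k_t
    ref = np.einsum('bqhd,bkhd->bhqk', q4, k4)
    assert scores.shape == (batch, heads, seq, seq)
    assert np.allclose(scores, ref, atol=1e-3)
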
[04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape_1__2748:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1/shape_Concat__2767 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1/shape_Concat__2767 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1/shape_Concat__2767 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1/shape_Concat__2767 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1/shape_Concat__2767:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1/shape_Concat__2767:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1/shape_Concat__2767 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1/shape_Concat__2767:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1__2768 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1/shape_Concat__2767:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1__2768 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1/shape_Concat__2767:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1__2768 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1__2768 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1__2768:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1__2768:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1__2768 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1__2768:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Transpose__3566:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1__2768:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1 [Reshape] inputs: [Transpose__3566:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1__2768:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot__2701:0 -> (3)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/BiasAdd [Add] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/BiasAdd [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/BiasAdd:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2__2725:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose [Transpose] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape [Shape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape:0 
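
The einsum/Shape and Shape__2785 cast just above, and the strided_slice / strided_slice_1 / shape_Concat__2804 entries that follow, assemble the 4-D output shape for the score tensor out of dynamic dims plus folded constants. One plausible reading (the mapping of each slice to a dimension is a guess, and score_reshape_target below is purely illustrative, not part of the model) is that the Concat produces [batch, 12, q_len, k_len], matching the (-1, 12, -1, -1) output of einsum/Reshape_2:

    import numpy as np

    def score_reshape_target(q_bhqd, k_bhdk, heads=12):
        # einsum/strided_slice: batch from the query-side runtime shape
        batch = np.asarray(q_bhqd.shape[0:1], dtype=np.int32)
        # einsum/strided_slice_1: query length
        q_len = np.asarray(q_bhqd.shape[2:3], dtype=np.int32)
        # einsum/strided_slice_3: key length, taken from the key branch
        k_len = np.asarray(k_bhdk.shape[3:4], dtype=np.int32)
        # einsum/Reshape_2/shape_Concat__2804: [batch, heads, q_len, k_len]
        return np.concatenate([batch, np.asarray([heads], dtype=np.int32), q_len, k_len])

    q = np.zeros((2, 12, 16, 64), dtype=np.float32)   # layout of einsum/Reshape:0, sizes assumed
    k = np.zeros((2, 12, 64, 16), dtype=np.float32)   # layout of einsum/Reshape_1:0, sizes assumed
    print(score_reshape_target(q, k))                 # [ 2 12 16 16]

The truediv that shows up shortly afterwards is presumably the 1/sqrt(head_dim) scaling, exported as a Mul with a constant-folded reciprocal rather than a division.
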
[04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice [Slice] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Shape__2785:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2/shape_Concat__2804 [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2/shape_Concat__2804 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2/shape_Concat__2804 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2/shape_Concat__2804 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2/shape_Concat__2804:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2/shape_Concat__2804:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2/shape_Concat__2804 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2/shape_Concat__2804:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2__2815 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2/shape_Concat__2804:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2__2815 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2/shape_Concat__2804:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2__2815 for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2__2815 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2__2815:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2__2815:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2__2815 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2__2815:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul [MatMul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul [MatMul] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2__2815:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2__2815:0 -> (4)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/truediv [Mul] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/truediv [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/truediv:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__484:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1 [Unsqueeze] inputs: [Not__484:0 -> (-1, -1)[BOOL]], 
[const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/concat [Concat] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/concat [04/08/2022-14:45:40] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/concat:0 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Not__499 [Not] [04/08/2022-14:45:40] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/concat:0 [04/08/2022-14:45:40] [V] [TRT] Not__499 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: Not__499 for ONNX node: Not__499 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Not__499:0 for ONNX tensor: Not__499:0 [04/08/2022-14:45:40] [V] [TRT] Not__499 [Not] outputs: [Not__499:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: Cast__502 [Cast] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Not__499:0 [04/08/2022-14:45:40] [V] [TRT] Cast__502 [Cast] inputs: [Not__499:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:40] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:40] [V] [TRT] Registering layer: Cast__502 for ONNX node: Cast__502 [04/08/2022-14:45:40] [V] [TRT] Registering tensor: Cast__502:0 for ONNX tensor: Cast__502:0 [04/08/2022-14:45:40] [V] [TRT] Cast__502 [Cast] outputs: [Cast__502:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:40] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/All_ReduceSum__508 [ReduceSum] [04/08/2022-14:45:40] [V] [TRT] Searching for input: Cast__502:0 [04/08/2022-14:45:40] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:40] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/All_ReduceSum__508 [ReduceSum] inputs: [Cast__502:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:40] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/All_ReduceSum__508 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/All_ReduceSum__508 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/All_ReduceSum__508:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/All_ReduceSum__508:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/All_ReduceSum__508 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/All_ReduceSum__508:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Greater__512 [Greater] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/All_ReduceSum__508:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] Greater__512 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_8/All_ReduceSum__508:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Greater__512 for ONNX node: Greater__512 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Greater__512:0 for ONNX tensor: Greater__512:0 [04/08/2022-14:45:41] [V] [TRT] Greater__512 [Greater] outputs: [Greater__512:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__515 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Greater__512:0 [04/08/2022-14:45:41] [V] [TRT] Not__515 [Not] inputs: [Greater__512:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__515 for ONNX node: Not__515 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__515:0 for ONNX tensor: Not__515:0 [04/08/2022-14:45:41] [V] [TRT] Not__515 [Not] outputs: [Not__515:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__515:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims [Unsqueeze] inputs: [Not__515:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/concat [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/concat [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/concat:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__524 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/concat:0 [04/08/2022-14:45:41] [V] [TRT] Not__524 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__524 for ONNX node: Not__524 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__524:0 for ONNX tensor: Not__524:0 [04/08/2022-14:45:41] [V] [TRT] Not__524 [Not] outputs: [Not__524:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Cast__527 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__524:0 [04/08/2022-14:45:41] [V] [TRT] Cast__527 [Cast] inputs: [Not__524:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: Cast__527 for ONNX node: Cast__527 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Cast__527:0 for ONNX tensor: Cast__527:0 [04/08/2022-14:45:41] [V] [TRT] Cast__527 [Cast] outputs: [Cast__527:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/All_ReduceSum__533 [ReduceSum] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Cast__527:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/All_ReduceSum__533 [ReduceSum] inputs: [Cast__527:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/All_ReduceSum__533 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/All_ReduceSum__533 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/All_ReduceSum__533:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/All_ReduceSum__533:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/All_ReduceSum__533 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/All_ReduceSum__533:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Greater__537 [Greater] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/All_ReduceSum__533:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] Greater__537 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_8/All_ReduceSum__533:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Greater__537 for ONNX node: Greater__537 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Greater__537:0 for ONNX tensor: Greater__537:0 [04/08/2022-14:45:41] [V] [TRT] Greater__537 [Greater] outputs: [Greater__537:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__540 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Greater__537:0 [04/08/2022-14:45:41] [V] [TRT] Not__540 [Not] inputs: [Greater__537:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__540 for ONNX node: Not__540 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__540:0 for ONNX tensor: Not__540:0 [04/08/2022-14:45:41] [V] [TRT] Not__540 [Not] outputs: [Not__540:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Cast [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__540:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Cast [Cast] inputs: [Not__540:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Cast [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Cast:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Cast:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) 
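The Not__524 / Cast__527 / All_ReduceSum__533 / Greater__537 / Not__540 chain above is the usual tf2onnx lowering of tf.reduce_all over the two stacked boolean masks: reduce_all(x) == not(sum(cast(not(x), float)) > 0). Below is a minimal NumPy sketch of that decomposition, not code taken from this model; the Greater threshold is assumed to be 0.0 (the scalar constant's value is not printed in this log) and the mask shapes are placeholders.

    import numpy as np

    def reduce_all_lowered(stacked_bool, axis=0):
        # reduce_all(x, axis) == not(reduce_sum(cast(not(x), float), axis) > threshold)
        inverted = np.logical_not(stacked_bool)       # Not__524
        as_float = inverted.astype(np.float32)        # Cast__527
        summed = as_float.sum(axis=axis)              # All_ReduceSum__533
        any_false = summed > 0.0                      # Greater__537 (threshold assumed 0.0)
        return np.logical_not(any_false)              # Not__540

    # Two boolean masks stacked on a leading axis, as the Concat above produces: shape (2, batch, seq).
    masks = np.stack([np.array([[True, True, False]]),
                      np.array([[True, False, False]])])
    merged = reduce_all_lowered(masks, axis=0)                      # (batch, seq)
    attention_mask = merged.astype(np.float32)[:, None, None, :]    # (batch, 1, 1, seq), like ExpandDims/ExpandDims_1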
[04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/sub [Sub] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/sub [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul_1 [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/truediv:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/add 
for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax [Softmax] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering 
layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1/shape_Concat__2830 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1/shape_Concat__2830 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1/shape_Concat__2830 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1/shape_Concat__2830 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1/shape_Concat__2830:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1/shape_Concat__2830:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1/shape_Concat__2830 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1/shape_Concat__2830:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1__2831 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1/shape_Concat__2830:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1__2831 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1/shape_Concat__2830:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1__2831 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1__2831 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1__2831:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1__2831:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1__2831 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1__2831:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1__2831:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1__2831:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Shape__2816:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2/shape_Concat__2850 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2/shape_Concat__2850 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2/shape_Concat__2850 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2/shape_Concat__2850 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2/shape_Concat__2850:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2/shape_Concat__2850:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2/shape_Concat__2850 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2/shape_Concat__2850:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2__2861 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2/shape_Concat__2850:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2__2861 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2/shape_Concat__2850:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2__2861 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2__2861 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2__2861:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2__2861:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2__2861 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2__2861:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax:0 
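The mul / mul_1 / add / Softmax entries for Transformer-MultiHeadSelfAttention_9 above implement the standard additive attention mask: the scaled scores are multiplied by the float mask and a large negative bias is added where the mask is 0, so those positions vanish after the Softmax. A sketch of that computation, assuming sub/x is 1.0 and Cast_1/x is a large negative constant (neither value is shown in this log):

    import numpy as np

    def masked_softmax(scores, mask, neg_bias=-1e12):
        # scores: (batch, heads, q_len, k_len), already divided by sqrt(head_size) upstream (truediv)
        # mask:   (batch, 1, 1, k_len) with values in {0.0, 1.0}
        logits = scores * mask + neg_bias * (1.0 - mask)       # mul, sub, mul_1, add
        logits = logits - logits.max(axis=-1, keepdims=True)   # numerical stability only
        weights = np.exp(logits)
        return weights / weights.sum(axis=-1, keepdims=True)   # Softmax over the key axis

    scores = np.random.randn(1, 12, 4, 4).astype(np.float32)
    mask = np.array([1.0, 1.0, 1.0, 0.0], dtype=np.float32).reshape(1, 1, 1, 4)
    probs = masked_softmax(scores, mask)   # probabilities for the masked key position are ~0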
[04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2__2861:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2__2861:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3 [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3__2862 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for 
input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3__2862 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3__2862 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3__2862 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3__2862:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3__2862:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3__2862 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3__2862:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice_3 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3__2862:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Shape_3__2862:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3/shape_Concat__2874 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3/shape_Concat__2874 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] 
Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3/shape_Concat__2874 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3/shape_Concat__2874 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3/shape_Concat__2874:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3/shape_Concat__2874:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3/shape_Concat__2874 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3/shape_Concat__2874:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3__2875 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3/shape_Concat__2874:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3__2875 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3/shape_Concat__2874:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3__2875 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3__2875 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3__2875:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3__2875:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3__2875 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3__2875:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3__2875:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3__2875:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], 
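The einsum_1/MatMul, einsum_1/transpose_1 and Reshape_3 entries above are the attention-weighted sum over V followed by merging the 12 heads of width 64 back into a single 768-wide feature axis, which is what the shape sequence (-1, 12, -1, 64) -> (-1, -1, 12, 64) -> (-1, -1, 768) reflects. A shape-level sketch in NumPy (variable names are illustrative, not taken from the graph):

    import numpy as np

    def attention_output(probs, v, num_heads=12, head_size=64):
        # probs: (batch, 12, q_len, k_len); v: (batch, 12, k_len, 64)
        context = np.matmul(probs, v)                # einsum_1/MatMul -> (batch, 12, q_len, 64)
        context = context.transpose(0, 2, 1, 3)      # einsum_1/transpose_1 -> (batch, q_len, 12, 64)
        b, q = context.shape[0], context.shape[1]
        return context.reshape(b, q, num_heads * head_size)   # Reshape_3 -> (batch, q_len, 768)

    probs = np.random.rand(2, 12, 5, 5).astype(np.float32)
    v = np.random.randn(2, 12, 5, 64).astype(np.float32)
    out = attention_output(probs, v)   # (2, 5, 768), fed into the dense_3/Tensordot projection that follows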
[04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape__2876 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape__2876 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape__2876 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape__2876 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape__2876:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape__2876:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape__2876 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape__2876:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape__2876:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Shape__2876:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot__2883 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot__2883 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot__2883 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot__2883 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot__2883:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot__2883:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot__2883 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot__2883:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot__2883:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot__2883:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add [Add] inputs: 
[StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_8/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub [Sub] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] 
[TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Square [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Square [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value/Minimum [Min] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value/Minimum [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value [Max] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value [Max] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Sqrt [Sqrt] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Sqrt [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/truediv [Div] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/truediv [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/truediv:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/mul [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/truediv:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] 
[V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/mul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/mul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1 [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/mul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape__2884 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape__2884 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape__2884 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape__2884 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape__2884:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape__2884:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape__2884 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape__2884:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape__2884:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Shape__2884:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot__2891 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot__2891 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot__2891 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot__2891 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot__2891:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot__2891:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot__2891 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot__2891:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot__2891:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot__2891:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/BiasAdd [04/08/2022-14:45:41] [V] 
[TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu [Relu] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape__2892 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape__2892 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape__2892 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape__2892 
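Note on the Tensordot chains above and below: each Dense layer applied to the 3-D (-1, -1, 768) activations is lowered to the same subgraph — Shape → Cast → Gather → Concat rebuilds the output shape, a Reshape flattens the input to (-1, 768), one 2-D MatMul does the projection against the (768, 3072) or (3072, 768) weight, and a final Reshape restores the (-1, -1, out_dim) layout before BiasAdd (and Relu for dense_4). A minimal NumPy sketch of that flatten–matmul–restore pattern; the batch/sequence sizes and array names here are illustrative, not taken from the model:

import numpy as np

# Illustrative sizes: batch=2, seq=5; 768 -> 3072 matches the dense_4 case in the log.
x = np.random.randn(2, 5, 768).astype(np.float32)
w = np.random.randn(768, 3072).astype(np.float32)   # plays the role of Tensordot/Reshape_1
b = np.random.randn(3072).astype(np.float32)        # plays the role of BiasAdd/ReadVariableOp

lead = x.shape[:-1]                        # what Shape -> Gather -> Concat reconstructs
flat = x.reshape(-1, x.shape[-1])          # Tensordot/Reshape  -> (-1, 768)
proj = flat @ w                            # Tensordot/MatMul   -> (-1, 3072)
y = proj.reshape(*lead, w.shape[1]) + b    # Tensordot (Reshape) + BiasAdd -> (-1, -1, 3072)

# Identical to contracting directly over the last axis.
assert np.allclose(y, np.tensordot(x, w, axes=[[2], [0]]) + b, atol=1e-4)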
[04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape__2892:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape__2892:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape__2892 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape__2892:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape__2892:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Shape__2892:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot__2899 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching 
for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot__2899 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot__2899 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot__2899 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot__2899:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot__2899:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot__2899 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot__2899:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/MatMul for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot__2899:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot__2899:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_9/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub [Sub] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub:0 -> (-1, -1, 768)[FLOAT]], 
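Note on the Norm_* groups such as the FeedForward-Norm_9 chain that starts just above and continues below (Mean, sub, Square, Mean_1, add, clip_by_value/Minimum, clip_by_value, Sqrt, truediv, mul, add_1): this is a layer normalization written out op by op over the last (768) axis, which is why the two reductions come through as GlobalAveragePool with (-1, -1, 1) outputs. A short NumPy rendering of the same arithmetic; the epsilon and the clip bounds are assumed values, since the constants behind add/y, Const and Const_1 are not printed in the log:

import numpy as np

def layer_norm(x, gamma, beta, eps=1e-12, clip_lo=0.0, clip_hi=1e6):
    # eps and the clip bounds are placeholders; the real constants are not shown in the log.
    mean = x.mean(axis=-1, keepdims=True)                      # Norm_9/Mean   (GlobalAveragePool)
    centered = x - mean                                        # Norm_9/sub
    var = (centered * centered).mean(axis=-1, keepdims=True)   # Square (Mul) + Mean_1
    var = np.clip(var + eps, clip_lo, clip_hi)                 # add, then the Minimum/Maximum pair
    return centered / np.sqrt(var) * gamma + beta              # Sqrt, truediv, mul, add_1

x = np.random.randn(2, 5, 768).astype(np.float32)
out = layer_norm(x, gamma=np.ones(768, np.float32), beta=np.zeros(768, np.float32))
print(out.shape)   # (2, 5, 768), matching the (-1, -1, 768) add_1 outputs above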
[04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Square [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Square [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value/Minimum [Min] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value/Minimum [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value [Max] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Sqrt [Sqrt] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Sqrt [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/truediv [Div] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/truediv [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/truediv:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/mul [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/truediv:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/mul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/mul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1 [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/mul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1/shape_Concat__2960 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1/shape_Concat__2960 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1/shape_Concat__2960 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1/shape_Concat__2960 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1/shape_Concat__2960:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1/shape_Concat__2960:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1/shape_Concat__2960 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1/shape_Concat__2960:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1/shape_Concat__2960:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1/shape_Concat__2960:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939 
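
The entries above trace the tf2onnx lowering of a Keras Dense applied through tf.tensordot: the (-1, -1, 768) activation is flattened to (-1, 768) by Tensordot/Reshape, multiplied against a (768, 768) kernel by Tensordot/MatMul, and (further down in the log) reshaped back to (-1, -1, 768) with a shape tensor rebuilt at runtime from Shape/Slice/Concat before a BiasAdd. A minimal numpy sketch of that pattern, assuming a hypothetical kernel W and bias b (not taken from the model):

import numpy as np

def dense_via_tensordot(x, W, b):
    # Mimics the Tensordot lowering in the log:
    # Reshape -> MatMul -> Reshape back -> BiasAdd on a (batch, seq, 768) tensor.
    batch, seq, hidden = x.shape        # dynamic dims; rebuilt via Shape/Slice/Concat in ONNX
    flat = x.reshape(-1, hidden)        # Tensordot/Reshape   -> (-1, 768)
    out = flat @ W                      # Tensordot/MatMul    -> (-1, 768)
    out = out.reshape(batch, seq, -1)   # Tensordot (Reshape) -> (-1, -1, 768)
    return out + b                      # BiasAdd

x = np.random.rand(2, 5, 768).astype(np.float32)
W = np.random.rand(768, 768).astype(np.float32)
b = np.zeros(768, dtype=np.float32)
assert dense_via_tensordot(x, W, b).shape == (2, 5, 768)
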
[04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/Shape__2900:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/concat_1 [Concat] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose [Transpose] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1__2940 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1__2940 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1__2940 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1__2940 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1__2940:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1__2940:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1__2940 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1__2940:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1__2940:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape_1__2940:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Transpose__3569 [Transpose] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] Transpose__3569 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Transpose__3569 for ONNX node: Transpose__3569 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Transpose__3569:0 for ONNX tensor: Transpose__3569:0 [04/08/2022-14:45:41] [V] [TRT] Transpose__3569 [Transpose] outputs: [Transpose__3569:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Shape__3631 [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] Shape__3631 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Shape__3631 for ONNX node: Shape__3631 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Shape__3631:0 for ONNX tensor: Shape__3631:0 [04/08/2022-14:45:41] [V] [TRT] Shape__3631 [Shape] outputs: [Shape__3631:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing 
node: Gather__3635 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Shape__3631:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:41] [V] [TRT] Gather__3635 [Gather] inputs: [Shape__3631:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: Gather__3635 for ONNX node: Gather__3635 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1:0 [04/08/2022-14:45:41] [V] [TRT] Gather__3635 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3:0 
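
At this point the 768-wide projections are split into 12 heads of 64 (12 x 64 = 768) and permuted for the attention einsum: Reshape_1/Reshape_2 produce (-1, -1, 12, 64), einsum_1/transpose moves the head axis forward to (-1, 12, -1, 64), and Transpose__3569 puts the key-side tensor into (-1, 12, 64, -1). A hedged numpy sketch of the same permutations, assuming the two dynamic dims are batch and sequence length:

import numpy as np

NUM_HEADS, HEAD_DIM = 12, 64            # 12 * 64 == 768, matching the shapes in the log

def split_heads(x, keys=False):
    # (batch, seq, 768) -> (batch, seq, 12, 64), then permute as the parser reports.
    batch, seq, _ = x.shape
    x = x.reshape(batch, seq, NUM_HEADS, HEAD_DIM)   # Reshape_1 / Reshape_2
    if keys:
        return x.transpose(0, 2, 3, 1)               # Transpose__3569 -> (batch, 12, 64, seq)
    return x.transpose(0, 2, 1, 3)                   # einsum_1/transpose -> (batch, 12, seq, 64)

q = split_heads(np.random.rand(2, 5, 768).astype(np.float32))
k = split_heads(np.random.rand(2, 5, 768).astype(np.float32), keys=True)
assert q.shape == (2, 12, 5, 64) and k.shape == (2, 12, 64, 5)
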
[04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape_1__2962:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1/shape_Concat__2981 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1/shape_Concat__2981 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1/shape_Concat__2981 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1/shape_Concat__2981 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1/shape_Concat__2981:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1/shape_Concat__2981:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1/shape_Concat__2981 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1/shape_Concat__2981:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1__2982 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1/shape_Concat__2981:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1__2982 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1/shape_Concat__2981:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1__2982 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1__2982 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1__2982:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1__2982:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1__2982 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1__2982:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Transpose__3569:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1__2982:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1 [Reshape] inputs: [Transpose__3569:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1__2982:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot__2907:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2__2939:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose [Transpose] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape:0 
-> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Shape__2999:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice 
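
With one operand laid out as (-1, 12, -1, 64) and the other as (-1, 12, 64, -1), the attention-score einsum reduces to the batched einsum/MatMul registered a little further below (output (-1, 12, -1, -1)), and the original division by the head-dimension scale appears as a Mul with a constant-folded reciprocal (the truediv_recip constant). A small numpy sketch of that step; the scale value is an assumption here (1/sqrt(64), the usual attention scaling), since the folded constant itself is not printed in the log:

import numpy as np

def attention_scores(q, k, scale=1.0 / np.sqrt(64.0)):
    # Batched Q.K^T followed by a Mul, mirroring einsum/MatMul + truediv-as-Mul in the log.
    # q: (batch, 12, seq, 64), k: (batch, 12, 64, seq); scale assumed to be 1/sqrt(64).
    scores = np.matmul(q, k)            # einsum/MatMul -> (batch, 12, seq, seq)
    return scores * scale               # truediv lowered to Mul by a folded reciprocal

q = np.random.rand(2, 12, 5, 64).astype(np.float32)
k = np.random.rand(2, 12, 64, 5).astype(np.float32)
assert attention_scores(q, k).shape == (2, 12, 5, 5)
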
[04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2/shape_Concat__3018 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2/shape_Concat__3018 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2/shape_Concat__3018 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2/shape_Concat__3018 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2/shape_Concat__3018:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2/shape_Concat__3018:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2/shape_Concat__3018 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2/shape_Concat__3018:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2__3029 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2/shape_Concat__3018:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2__3029 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2/shape_Concat__3018:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2__3029 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2__3029 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2__3029:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2__3029:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2__3029 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2__3029:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2__3029:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2__3029:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/truediv [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/truediv [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/truediv:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__540:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1 [Unsqueeze] inputs: [Not__540:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:41] [V] 
[TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/concat [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/concat [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/concat:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__555 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/concat:0 [04/08/2022-14:45:41] [V] [TRT] Not__555 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__555 for ONNX node: Not__555 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__555:0 for ONNX tensor: Not__555:0 [04/08/2022-14:45:41] [V] [TRT] Not__555 [Not] outputs: [Not__555:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Cast__558 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__555:0 [04/08/2022-14:45:41] [V] [TRT] Cast__558 [Cast] inputs: [Not__555:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: Cast__558 for ONNX node: Cast__558 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Cast__558:0 for ONNX tensor: Cast__558:0 [04/08/2022-14:45:41] [V] [TRT] Cast__558 [Cast] outputs: [Cast__558:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/All_ReduceSum__564 [ReduceSum] [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
Cast__558:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/All_ReduceSum__564 [ReduceSum] inputs: [Cast__558:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/All_ReduceSum__564 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/All_ReduceSum__564 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/All_ReduceSum__564:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/All_ReduceSum__564:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/All_ReduceSum__564 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/All_ReduceSum__564:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Greater__568 [Greater] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/All_ReduceSum__564:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] Greater__568 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_9/All_ReduceSum__564:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Greater__568 for ONNX node: Greater__568 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Greater__568:0 for ONNX tensor: Greater__568:0 [04/08/2022-14:45:41] [V] [TRT] Greater__568 [Greater] outputs: [Greater__568:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__571 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Greater__568:0 [04/08/2022-14:45:41] [V] [TRT] Not__571 [Not] inputs: [Greater__568:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__571 for ONNX node: Not__571 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__571:0 for ONNX tensor: Not__571:0 [04/08/2022-14:45:41] [V] [TRT] Not__571 [Not] outputs: [Not__571:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__571:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims [Unsqueeze] inputs: [Not__571:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims [Unsqueeze] 
outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/concat [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/concat [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/concat:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__580 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/concat:0 [04/08/2022-14:45:41] [V] [TRT] Not__580 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__580 for ONNX node: Not__580 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__580:0 for ONNX tensor: Not__580:0 [04/08/2022-14:45:41] [V] [TRT] Not__580 [Not] outputs: [Not__580:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Cast__583 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__580:0 [04/08/2022-14:45:41] [V] [TRT] Cast__583 [Cast] inputs: [Not__580:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: Cast__583 for ONNX node: Cast__583 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Cast__583:0 for ONNX tensor: Cast__583:0 [04/08/2022-14:45:41] [V] [TRT] Cast__583 [Cast] outputs: [Cast__583:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/All_ReduceSum__589 [ReduceSum] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Cast__583:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/All_ReduceSum__589 [ReduceSum] inputs: [Cast__583:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/All_ReduceSum__589 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/All_ReduceSum__589 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/All_ReduceSum__589:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/All_ReduceSum__589:0 [04/08/2022-14:45:41] [V] 
[TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/All_ReduceSum__589 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/All_ReduceSum__589:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Greater__593 [Greater] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/All_ReduceSum__589:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] Greater__593 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_9/All_ReduceSum__589:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Greater__593 for ONNX node: Greater__593 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Greater__593:0 for ONNX tensor: Greater__593:0 [04/08/2022-14:45:41] [V] [TRT] Greater__593 [Greater] outputs: [Greater__593:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__596 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Greater__593:0 [04/08/2022-14:45:41] [V] [TRT] Not__596 [Not] inputs: [Greater__593:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__596 for ONNX node: Not__596 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__596:0 for ONNX tensor: Not__596:0 [04/08/2022-14:45:41] [V] [TRT] Not__596 [Not] outputs: [Not__596:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Cast [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__596:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Cast [Cast] inputs: [Not__596:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Cast [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Cast:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Cast:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/sub [Sub] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/sub [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul_1 [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/truediv:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/add 
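Note on the `sub` / `mul_1` / `mul` / `add` nodes just parsed for `Transformer-MultiHeadSelfAttention_10`: together with the `ExpandDims` pair and the `Softmax` that follows, they look like the usual additive attention-mask pattern that tf2onnx exports from Keras-style BERT code. The scalar constants (`sub/x`, `Cast_1/x`) are ONNX initializers and are not printed in this log, so the sketch below is illustrative only; the `-1e12` value and the function/argument names are assumptions, not values read from the model.

```python
# Minimal numpy sketch of the masked-softmax pattern suggested by the
# sub / mul_1 / mul / add / Softmax nodes above. Illustrative only:
# the real scalar constants live in the ONNX initializers, not in this log.
import numpy as np

def masked_softmax(scores, mask, neg_inf=-1e12):
    """scores: (batch, heads, q_len, k_len); mask: (batch, k_len), 1 = keep, 0 = padding."""
    m = mask[:, None, None, :].astype(scores.dtype)   # two ExpandDims -> (batch, 1, 1, k_len)
    scores = scores * m + (1.0 - m) * neg_inf         # mul + (sub -> mul_1) + add
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)          # Softmax over the key axis
```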
[04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax [Softmax] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1/shape_Concat__3044 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1/shape_Concat__3044 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1/shape_Concat__3044 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1/shape_Concat__3044 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1/shape_Concat__3044:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1/shape_Concat__3044:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1/shape_Concat__3044 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1/shape_Concat__3044:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1__3045 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1/shape_Concat__3044:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1__3045 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1/shape_Concat__3044:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1__3045 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1__3045 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1__3045:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1__3045:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1__3045 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1__3045:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1__3045:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1__3045:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1:0 [04/08/2022-14:45:41] [V] 
[TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Shape__3030:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2/shape_Concat__3064 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2/shape_Concat__3064 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2/shape_Concat__3064 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2/shape_Concat__3064 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2/shape_Concat__3064:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2/shape_Concat__3064:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2/shape_Concat__3064 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2/shape_Concat__3064:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2__3075 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2/shape_Concat__3064:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2__3075 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2/shape_Concat__3064:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2__3075 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2__3075 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2__3075:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2__3075:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2__3075 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2__3075:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2__3075:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul:0 -> (-1, 12, 
-1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2__3075:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3 [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3__3076 [Cast] 
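The `einsum_1` chain above (Shape/Slice/Concat to build dynamic reshape targets, then Reshape, batched MatMul, Reshape, Transpose, and the later `Reshape_3` to `(-1, -1, 768)`) is how tf2onnx lowers the attention-probabilities-times-value einsum. The sketch below is only my reconstruction of the equivalent computation under the shapes shown in the log (probs `(batch, 12, q_len, k_len)`, per-head value `(batch, 12, k_len, 64)`); the einsum subscripts and names are my notation, not taken from the model.

```python
# Illustrative equivalent of the einsum_1 Reshape/MatMul/Reshape/Transpose chain,
# assuming the second MatMul operand (einsum_1/transpose:0) is the per-head value tensor.
import numpy as np

def attention_output(probs, value):
    ctx = np.einsum('bhqk,bhkd->bhqd', probs, value)  # batched MatMul per head
    ctx = ctx.transpose(0, 2, 1, 3)                   # einsum_1/transpose_1 -> (b, q, 12, 64)
    b, q, h, d = ctx.shape
    return ctx.reshape(b, q, h * d)                   # Reshape_3 -> (b, q, 768)
```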
[04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3__3076 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3__3076 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3__3076 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3__3076:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3__3076:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3__3076 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3__3076:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice_3 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3__3076:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Shape_3__3076:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3/shape_Concat__3088 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3/shape_Concat__3088 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/strided_slice_3:0 -> (1)[INT32]], 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3/shape_Concat__3088 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3/shape_Concat__3088 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3/shape_Concat__3088:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3/shape_Concat__3088:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3/shape_Concat__3088 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3/shape_Concat__3088:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3__3089 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3/shape_Concat__3088:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3__3089 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3/shape_Concat__3088:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3__3089 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3__3089 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3__3089:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3__3089:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3__3089 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3__3089:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3__3089:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3__3089:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape__3090 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape__3090 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape__3090 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape__3090 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape__3090:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape__3090:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape__3090 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape__3090:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape__3090:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/GatherV2 [Gather] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Shape__3090:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot__3097 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot__3097 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot__3097 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot__3097 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot__3097:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot__3097:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot__3097 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot__3097:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot__3097:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot__3097:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_9/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub [Sub] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub [04/08/2022-14:45:41] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Square [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Square [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value/Minimum [Min] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value/Minimum [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value [Max] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value 
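
The entries around this point trace two patterns that tf2onnx emits for every Transformer block in this graph: a Keras Dense lowered into Shape -> Cast -> GatherV2 -> Concat (to rebuild the dynamic leading dimensions), a Reshape to (-1, 768), a MatMul against a 768x768 weight, a Reshape back to (-1, -1, 768) and a BiasAdd; and a layer normalization spelled out op by op as Mean, Sub, Square (Mul), Mean_1, add-epsilon, clip_by_value (Min then Max), Sqrt, Div, Mul and Add (the remaining Sqrt/Div/Mul/Add_1 entries follow just below; TensorRT reports the per-token Mean as GlobalAveragePool). The following is a minimal NumPy sketch of what these nodes compute, not TensorRT or model code; the epsilon, clip bound, tensor values and helper names are illustrative assumptions only.

import numpy as np

def tensordot_dense(x, w, b):
    # Dense on a (batch, seq, 768) tensor, as lowered by tf2onnx:
    # Shape/GatherV2/Concat recover the leading dims, then Reshape -> MatMul -> Reshape -> BiasAdd.
    lead_shape = x.shape[:-1]                # what Shape + GatherV2 + Concat reconstruct
    y = x.reshape(-1, x.shape[-1]) @ w       # Tensordot/Reshape followed by Tensordot/MatMul
    y = y.reshape(*lead_shape, w.shape[-1])  # final Tensordot [Reshape] back to (-1, -1, units)
    return y + b                             # dense_*/BiasAdd (broadcast over the last axis)

def decomposed_layer_norm(x, gamma, beta, eps=1e-12, clip_max=3.4e38):
    # Layer norm as the *-Norm_10 nodes spell it out; eps and clip_max are placeholders,
    # the actual constants live in Transformer-FeedForward-Norm/add/y and the Const tensors.
    mean = x.mean(axis=-1, keepdims=True)                      # Norm_10/Mean (GlobalAveragePool)
    centered = x - mean                                        # Norm_10/sub
    var = (centered * centered).mean(axis=-1, keepdims=True)   # Norm_10/Square + Mean_1
    var = np.clip(var + eps, eps, clip_max)                    # add + clip_by_value/Minimum + clip_by_value
    return centered / np.sqrt(var) * gamma + beta              # Sqrt, truediv, mul, add_1

# Illustrative shapes only (batch=2, seq=4, hidden=768), matching the (-1, -1, 768) tensors above.
x = np.random.randn(2, 4, 768).astype(np.float32)
w = np.random.randn(768, 768).astype(np.float32)
b = np.zeros(768, dtype=np.float32)
gamma, beta = np.ones(768, dtype=np.float32), np.zeros(768, dtype=np.float32)
out = decomposed_layer_norm(x + tensordot_dense(x, w, b), gamma, beta)  # residual Add then Norm
print(out.shape)  # (2, 4, 768)

The same Tensordot and decomposed-norm sequence repeats below for Transformer-FeedForward_10 (dense_4 expanding 768 -> 3072 with Relu, dense_5 projecting 3072 -> 768) and for Transformer-FeedForward-Norm_10.
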
[04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Sqrt [Sqrt] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Sqrt [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/truediv [Div] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/truediv [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/truediv:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/mul [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/truediv:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/mul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/mul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1 [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/mul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape__3098 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape__3098 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape__3098 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape__3098 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape__3098:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape__3098:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape__3098 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape__3098:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape__3098:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Shape__3098:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot__3105 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot__3105 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot__3105 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot__3105 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot__3105:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot__3105:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot__3105 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot__3105:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Reshape:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot__3105:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot__3105:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] 
Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu [Relu] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape__3106 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape__3106 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape__3106 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape__3106 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape__3106:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape__3106:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape__3106 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape__3106:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape__3106:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Shape__3106:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot__3113 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot__3113 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot__3113 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot__3113 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot__3113:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot__3113:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot__3113 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot__3113:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot__3113:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot__3113:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_10/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub [Sub] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Square [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Square [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value/Minimum [Min] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value/Minimum [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value [Max] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Sqrt [Sqrt] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Sqrt [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/truediv [Div] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/truediv [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/truediv:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/mul [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/truediv:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/truediv:0 -> (-1, -1, 
768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/mul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/mul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1 [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/mul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape [Reshape] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] 
Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1/shape_Concat__3174 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1/shape_Concat__3174 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1/shape_Concat__3174 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1/shape_Concat__3174 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1/shape_Concat__3174:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1/shape_Concat__3174:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1/shape_Concat__3174 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1/shape_Concat__3174:0 -> (4)[INT32]], 
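Annotation: the chain of Mean, sub, add, clip_by_value (lowered to Min/Max), Sqrt, truediv, mul and add_1 nodes parsed above appears to be the usual TensorFlow layer-normalization pattern that tf2onnx exports op-by-op, while the following Shape/Cast/Slice/Concat nodes only assemble the dynamic (batch, seq, 12, 64) reshape target for the attention heads. A minimal NumPy sketch of what that normalization tail computes, assuming illustrative values for the epsilon and clip bounds (the real constants are ONNX initializers such as Transformer-FeedForward-Norm/add/y:0):

import numpy as np

def layer_norm_tail(x, gamma, beta, eps=1e-12, clip_min=0.0, clip_max=np.inf):
    # Mirrors the Mean_1/add/clip_by_value (Minimum, Maximum)/Sqrt/truediv/mul/add_1
    # chain seen in the verbose parse above; eps and the clip bounds are assumptions.
    mean = x.mean(axis=-1, keepdims=True)               # Mean   -> (-1, -1, 1)
    centered = x - mean                                 # sub    -> (-1, -1, 768)
    var = (centered ** 2).mean(axis=-1, keepdims=True)  # Mean_1 -> (-1, -1, 1)
    var = np.clip(var + eps, clip_min, clip_max)        # add + Minimum/Maximum
    return centered / np.sqrt(var) * gamma + beta       # Sqrt, truediv, mul, add_1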
[04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1/shape_Concat__3174:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1/shape_Concat__3174:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape__3197:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/GatherV2:0 -> (2)[INT32]], 
[StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose [Transpose] [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1 [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1__3154 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1__3154 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1__3154 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1__3154 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1__3154:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1__3154:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1__3154 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1__3154:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_3 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1__3154:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape_1__3154:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/BiasAdd 
[Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Transpose__3578 [Transpose] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] Transpose__3578 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Transpose__3578 for ONNX node: Transpose__3578 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Transpose__3578:0 for ONNX tensor: Transpose__3578:0 [04/08/2022-14:45:41] [V] [TRT] Transpose__3578 [Transpose] outputs: [Transpose__3578:0 -> 
(-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Shape__3636 [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] Shape__3636 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Shape__3636 for ONNX node: Shape__3636 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Shape__3636:0 for ONNX tensor: Shape__3636:0 [04/08/2022-14:45:41] [V] [TRT] Shape__3636 [Shape] outputs: [Shape__3636:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Gather__3640 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Shape__3636:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: Const__3624 [04/08/2022-14:45:41] [V] [TRT] Gather__3640 [Gather] inputs: [Shape__3636:0 -> (4)[INT32]], [Const__3624 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: Gather__3640 for ONNX node: Gather__3640 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1:0 [04/08/2022-14:45:41] [V] [TRT] Gather__3640 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3 [Slice] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_2 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape_1__3176:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1/shape_Concat__3195 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1/shape_Concat__3195 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1/shape_Concat__3195 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1/shape_Concat__3195 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1/shape_Concat__3195:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1/shape_Concat__3195:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1/shape_Concat__3195 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1/shape_Concat__3195:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1__3196 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1/shape_Concat__3195:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1__3196 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1/shape_Concat__3195:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1__3196 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1__3196 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1__3196:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1__3196:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1__3196 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1__3196:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Transpose__3578:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1__3196:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1 [Reshape] inputs: [Transpose__3578:0 -> (-1, 12, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1__3196:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot__3137:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/BiasAdd:0 
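Annotation: each of the dense, dense_1 and dense_2 projections above is a TensorFlow Tensordot that tf2onnx lowers to Reshape to (-1, 768), MatMul with a (768, 768) weight, Reshape back to (batch, seq, 768), BiasAdd, and finally a Reshape to (batch, seq, 12, 64) that splits the 12 heads of size 64. A rough NumPy equivalent of one projection; shapes and argument names are assumptions for illustration, not taken from the graph:

import numpy as np

def tensordot_dense(x, w, b, num_heads=12, head_size=64):
    # x: (batch, seq, 768), w: (768, 768), b: (768,) -- assumed shapes.
    # Mirrors Tensordot/Reshape -> Tensordot/MatMul -> Tensordot (Reshape back)
    # -> BiasAdd -> Reshape/Reshape_1/Reshape_2 as parsed for each projection.
    batch, seq, hidden = x.shape
    y = x.reshape(-1, hidden) @ w                        # flatten, then MatMul
    y = y.reshape(batch, seq, hidden) + b                # restore rank 3, add bias
    return y.reshape(batch, seq, num_heads, head_size)   # split into heads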
[04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1__3175:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose [Transpose] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] Searching for 
input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Shape__3213:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2/shape_Concat__3232 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2/shape_Concat__3232 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_1:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2/shape_Concat__3232 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2/shape_Concat__3232 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2/shape_Concat__3232:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2/shape_Concat__3232:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2/shape_Concat__3232 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2/shape_Concat__3232:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2__3243 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2/shape_Concat__3232:0 
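The Reshape / Transpose / Shape / strided_slice / Concat / MatMul / Reshape_2 run being parsed around this point is the pattern tf2onnx appears to emit when it lowers the attention-score einsum of a BERT-style block into primitive ops: the first Reshape splits the 768-wide projection into 12 heads of 64, the Transpose moves the head axis forward, the Shape/Slice/Concat sub-chain only rebuilds the dynamic output shape for the final Reshape_2, and the actual work is a single 4-D MatMul followed by the "truediv" (parsed as a Mul by a precomputed reciprocal). A minimal NumPy sketch of the equivalent computation, assuming the 12-head x 64-dim layout the logged shapes show; all names below are illustrative and not taken from the graph:

import numpy as np

def attention_scores(q, k, num_heads=12, head_dim=64):
    """q, k: (batch, seq, 768) -> scores: (batch, num_heads, seq, seq)."""
    b, s, _ = q.shape
    # Reshape (-1, -1, 768) -> (-1, -1, 12, 64), then Transpose -> (-1, 12, -1, 64)
    qh = q.reshape(b, s, num_heads, head_dim).transpose(0, 2, 1, 3)
    # K gets the extra transpose seen in the log: (-1, 12, 64, -1)
    kh = k.reshape(b, s, num_heads, head_dim).transpose(0, 2, 3, 1)
    scores = qh @ kh                              # the einsum/MatMul node
    return scores * (1.0 / np.sqrt(head_dim))     # truediv: Mul by a folded reciprocal

print(attention_scores(np.random.rand(2, 7, 768), np.random.rand(2, 7, 768)).shape)  # (2, 12, 7, 7)

The 1/sqrt(head_dim) scale is the usual value; in the graph it is folded into the truediv_recip constant, so the exact number lives in that initializer.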
[04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2__3243 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2/shape_Concat__3232:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2__3243 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2__3243 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2__3243:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2__3243:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2__3243 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2__3243:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_shape__4154 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1:0 -> (-1, 12, 64, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul for ONNX node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2__3243:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2__3243:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/truediv [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2:0 -> (-1, 12, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/truediv [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/truediv:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/truediv [Mul] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__596:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1 [Unsqueeze] inputs: [Not__596:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/concat [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/ExpandDims_1:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/concat [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/concat:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__611 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/concat:0 [04/08/2022-14:45:41] [V] [TRT] Not__611 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__611 for ONNX node: Not__611 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__611:0 for ONNX tensor: Not__611:0 [04/08/2022-14:45:41] [V] [TRT] Not__611 [Not] outputs: [Not__611:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] 
Parsing node: Cast__614 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__611:0 [04/08/2022-14:45:41] [V] [TRT] Cast__614 [Cast] inputs: [Not__611:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: Cast__614 for ONNX node: Cast__614 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Cast__614:0 for ONNX tensor: Cast__614:0 [04/08/2022-14:45:41] [V] [TRT] Cast__614 [Cast] outputs: [Cast__614:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/All_ReduceSum__620 [ReduceSum] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Cast__614:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/All_ReduceSum__620 [ReduceSum] inputs: [Cast__614:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/All_ReduceSum__620 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/All_ReduceSum__620 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/All_ReduceSum__620:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/All_ReduceSum__620:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/All_ReduceSum__620 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/All_ReduceSum__620:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Greater__624 [Greater] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/All_ReduceSum__620:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] Greater__624 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_10/All_ReduceSum__620:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Greater__624 for ONNX node: Greater__624 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Greater__624:0 for ONNX tensor: Greater__624:0 [04/08/2022-14:45:41] [V] [TRT] Greater__624 [Greater] outputs: [Greater__624:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__627 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Greater__624:0 [04/08/2022-14:45:41] [V] [TRT] Not__627 [Not] inputs: [Greater__624:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__627 for ONNX node: Not__627 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__627:0 for ONNX tensor: Not__627:0 [04/08/2022-14:45:41] [V] [TRT] Not__627 [Not] outputs: [Not__627:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__627:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims [Unsqueeze] inputs: [Not__627:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/concat [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/concat [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/concat:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__636 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/concat:0 [04/08/2022-14:45:41] [V] [TRT] Not__636 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__636 for ONNX node: Not__636 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__636:0 for ONNX tensor: Not__636:0 [04/08/2022-14:45:41] [V] [TRT] Not__636 [Not] outputs: [Not__636:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Cast__639 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__636:0 [04/08/2022-14:45:41] [V] [TRT] Cast__639 [Cast] inputs: [Not__636:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: Cast__639 for ONNX node: Cast__639 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Cast__639:0 for ONNX tensor: Cast__639:0 [04/08/2022-14:45:41] [V] [TRT] Cast__639 [Cast] outputs: [Cast__639:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/All_ReduceSum__645 [ReduceSum] 
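The Unsqueeze / Concat / Not / Cast / ReduceSum / Greater / Not chains being parsed here (once for the MultiHeadSelfAttention-Add_10 mask, once for the FeedForward-Add_10 mask) look like tf2onnx's rewrite of a boolean all() reduction, since the opset offers no boolean ReduceAll: all(x) becomes not(sum(cast(not(x))) > 0). The resulting mask is then applied additively to the attention logits just before the Softmax a few entries further down (the ExpandDims, sub, mul_1, mul, add nodes). A hedged NumPy sketch of both patterns; the function names and the fill value are illustrative, and the actual constants live in the Cast_1/x and sub/x initializers referenced by the log:

import numpy as np

def reduce_all_lowered(x, axis=0):
    # all(x, axis) with only the ops seen in the log: Not -> Cast -> ReduceSum -> Greater -> Not
    return ~(np.sum((~x).astype(np.float32), axis=axis) > 0.0)

def masked_softmax(scores, mask, neg=-1e12):
    # scores: (batch, 12, q_len, k_len); mask: (batch, k_len) with 1 = keep, 0 = pad
    m = mask.astype(np.float32)[:, None, None, :]   # the two ExpandDims -> (-1, 1, 1, -1)
    logits = scores * m + (1.0 - m) * neg           # mul, sub, mul_1, add
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)        # Softmax over the last axis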
[04/08/2022-14:45:41] [V] [TRT] Searching for input: Cast__639:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/All_ReduceSum__645 [ReduceSum] inputs: [Cast__639:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/All_ReduceSum__645 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/All_ReduceSum__645 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/All_ReduceSum__645:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/All_ReduceSum__645:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/All_ReduceSum__645 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/All_ReduceSum__645:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Greater__649 [Greater] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/All_ReduceSum__645:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] Greater__649 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_10/All_ReduceSum__645:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Greater__649 for ONNX node: Greater__649 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Greater__649:0 for ONNX tensor: Greater__649:0 [04/08/2022-14:45:41] [V] [TRT] Greater__649 [Greater] outputs: [Greater__649:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: Not__652 [Not] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Greater__649:0 [04/08/2022-14:45:41] [V] [TRT] Not__652 [Not] inputs: [Greater__649:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: Not__652 for ONNX node: Not__652 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: Not__652:0 for ONNX tensor: Not__652:0 [04/08/2022-14:45:41] [V] [TRT] Not__652 [Not] outputs: [Not__652:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Cast [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: Not__652:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Cast [Cast] inputs: [Not__652:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Cast for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Cast [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Cast:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Cast:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/sub [Sub] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/sub [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul_1 [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/sub:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/truediv:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/truediv:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/add [Add] [04/08/2022-14:45:41] [V] [TRT] 
Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/mul_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax [Softmax] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax [Softmax] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/add:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax [Softmax] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape [Shape] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_2 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_ends__1106 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244:0 -> (4)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_ends__1106 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1/shape_Concat__3258 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1/shape_Concat__3258 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_3:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_2:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1/shape_Concat__3258 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1/shape_Concat__3258 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1/shape_Concat__3258:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1/shape_Concat__3258:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1/shape_Concat__3258 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1/shape_Concat__3258:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1__3259 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1/shape_Concat__3258:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1__3259 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1/shape_Concat__3258:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1__3259 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1__3259 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1__3259:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1__3259:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1__3259 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1__3259:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1__3259:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1 [Reshape] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1__3259:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_1 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244:0 -> (4)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Shape__3244:0 -> (4)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2/shape_Concat__3278 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3668 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2/shape_Concat__3278 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice:0 -> (1)[INT32]], [const_fold_opt__3668 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/strided_slice_1:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2/shape_Concat__3278 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2/shape_Concat__3278 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2/shape_Concat__3278:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2/shape_Concat__3278:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2/shape_Concat__3278 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2/shape_Concat__3278:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2__3289 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2/shape_Concat__3278:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2__3289 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2/shape_Concat__3278:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2__3289 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2__3289 [04/08/2022-14:45:41] [V] 
[TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2__3289:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2__3289:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2__3289 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2__3289:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_shape__4124 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape:0 -> (-1, 12, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul:0 -> (-1, 12, -1, 
64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2__3289:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul:0 -> (-1, 12, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2__3289:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1 [Transpose] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1 [Transpose] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2:0 -> (-1, 12, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1 [Transpose] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3 [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3 [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3 [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3__3290 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3__3290 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3__3290 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3__3290 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3__3290:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3__3290:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3__3290 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3__3290:0 -> (4)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice_3 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3__3290:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Shape_3__3290:0 -> (4)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3/shape_Concat__3302 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice_3:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3/shape_Concat__3302 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/strided_slice_3:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3/shape_Concat__3302 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3/shape_Concat__3302 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3/shape_Concat__3302:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3/shape_Concat__3302:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3/shape_Concat__3302 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3/shape_Concat__3302:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3__3303 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3/shape_Concat__3302:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3__3303 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3/shape_Concat__3302:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3__3303 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3__3303 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3__3303:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3__3303:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3__3303 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3__3303:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3 [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3__3303:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3 [Reshape] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1:0 -> (-1, -1, 12, 64)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3__3303:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape__3304 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape__3304 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape__3304 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape__3304 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape__3304:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape__3304:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape__3304 [Cast] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape__3304:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape__3304:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Shape__3304:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot__3311 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/concat_1:0 
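
The einsum_1/Reshape_2 -> einsum_1/transpose_1 -> Reshape_3 chain traced above is the head-merge at the tail of the MultiHeadSelfAttention_11 block: the per-head context of shape (batch, 12, seq, 64) is transposed to (batch, seq, 12, 64) and flattened to (batch, seq, 768). Because the sequence length is dynamic (-1), tf2onnx rebuilds the target shape at runtime, which is why the surrounding Shape / Cast / strided_slice / Concat nodes appear. A minimal numpy sketch of that data movement, with shapes read off the log and arbitrary batch/seq values (variable names are illustrative, not from the model):

    import numpy as np

    batch, heads, seq, head_dim = 2, 12, 5, 64            # seq is dynamic (-1) in the engine
    ctx = np.random.rand(batch, heads, seq, head_dim)      # output of einsum_1/MatMul

    merged = ctx.transpose(0, 2, 1, 3)                     # einsum_1/transpose_1 -> (batch, seq, 12, 64)
    merged = merged.reshape(batch, seq, heads * head_dim)  # Reshape_3 -> (batch, seq, 768)
    assert merged.shape == (batch, seq, 768)
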
[04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot__3311 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot__3311 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot__3311 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot__3311:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot__3311:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot__3311 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot__3311:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape_1:0 -> (768, 768)[FLOAT]], 
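
The dense_3/Tensordot nodes show how a Keras Dense applied to a 3-D tensor is lowered by tf2onnx: collapse the leading dimensions with a Reshape, run a plain 2-D MatMul against the (768, 768) kernel, then restore the original leading shape with the dynamically computed Tensordot__3311 shape. A short numpy sketch of the same computation, using random stand-in weights and arbitrary batch/seq (names are illustrative):

    import numpy as np

    batch, seq = 2, 5
    x = np.random.rand(batch, seq, 768)
    w = np.random.rand(768, 768)                   # stand-in for the (768, 768) kernel in the log

    flat = x.reshape(-1, 768)                      # dense_3/Tensordot/Reshape
    out = flat @ w                                 # dense_3/Tensordot/MatMul
    out = out.reshape(batch, seq, 768)             # dense_3/Tensordot (final Reshape)
    # the dense_3/BiasAdd that follows adds a (768,) bias broadcast over batch and seq
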
[04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot__3311:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot__3311:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/BiasAdd:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_10/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub [Sub] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub [Sub] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Square [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Square [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean_1 [GlobalAveragePool] outputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value/Minimum [Min] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value/Minimum [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value [Max] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Sqrt [Sqrt] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Sqrt [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/truediv [Div] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/truediv [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/truediv:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/mul [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/truediv:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_1/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/mul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/mul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1 [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/mul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape [Shape] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape__3312 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape__3312 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape__3312 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape__3312 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape__3312:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape__3312:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape__3312 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape__3312:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape__3312:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Shape__3312:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], 
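
The MultiHeadSelfAttention-Norm_11 run just above (Mean -> sub -> Square -> Mean_1 -> add -> clip_by_value Min/Max -> Sqrt -> truediv -> mul -> add_1) is a layer normalization written out in primitive ops; the reduce-mean over the 768-dim axis is reported as GlobalAveragePool by the parser. Functionally it is equivalent to the sketch below, where epsilon and the clip bounds are placeholders (the real values sit in the ONNX constants referenced in the log):

    import numpy as np

    def layer_norm(x, gamma, beta, eps=1e-12, clip_lo=0.0, clip_hi=1e9):
        mean = x.mean(axis=-1, keepdims=True)                      # Mean (GlobalAveragePool)
        centered = x - mean                                        # sub
        var = (centered * centered).mean(axis=-1, keepdims=True)   # Square + Mean_1
        var = np.clip(var + eps, clip_lo, clip_hi)                 # add + clip_by_value (Min/Max)
        return centered / np.sqrt(var) * gamma + beta              # Sqrt, truediv, mul, add_1

    x = np.random.rand(2, 5, 768)
    y = layer_norm(x, gamma=np.ones(768), beta=np.zeros(768))
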
[04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_1/dense_4/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot__3319 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot__3319 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot__3319 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot__3319 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot__3319:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot__3319:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot__3319 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot__3319:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> 
(2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_9/dense_4/Tensordot/Reshape_1:0 -> (768, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot__3319:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot/MatMul:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot__3319:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot [Reshape] 
outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Tensordot:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_4/BiasAdd/ReadVariableOp:0 -> (3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu [Relu] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu [Relu] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/BiasAdd:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu [Relu] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape__3320 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape__3320 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape__3320 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape__3320 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape__3320:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape__3320:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape__3320 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape__3320:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape__3320:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Shape__3320:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_10/dense_5/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot__3327 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot__3327 [Cast] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot__3327 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot__3327 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot__3327:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot__3327:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot__3327 [Cast] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot__3327:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_4/Relu:0 -> (-1, -1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_8/dense_5/Tensordot/Reshape_shape__4153 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Reshape:0 
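
The FeedForward_11 block, reconstructed from the shapes in the log, is the standard position-wise feed-forward: Dense 768 -> 3072 with ReLU (dense_4 Tensordot + BiasAdd + Relu), then Dense 3072 -> 768 (dense_5 Tensordot plus the BiasAdd that follows), each Dense lowered to the same Reshape -> MatMul -> Reshape pattern as above. A sketch with random stand-in weights (only the shapes are taken from the log):

    import numpy as np

    batch, seq = 2, 5
    x = np.random.rand(batch, seq, 768)
    w1, b1 = np.random.rand(768, 3072), np.zeros(3072)   # dense_4 kernel/bias stand-ins
    w2, b2 = np.random.rand(3072, 768), np.zeros(768)    # dense_5 kernel/bias stand-ins

    h = np.maximum(x.reshape(-1, 768) @ w1 + b1, 0.0)    # dense_4 Tensordot + BiasAdd + Relu
    y = (h @ w2 + b2).reshape(batch, seq, 768)           # dense_5 Tensordot + BiasAdd
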
[04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/Reshape:0 -> (-1, 3072)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_4/dense_5/Tensordot/Reshape_1:0 -> (3072, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot__3327:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot/MatMul:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot__3327:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/Tensordot:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_5/dense_5/BiasAdd/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm_11/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward_11/dense_5/BiasAdd:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean:0 -> (-1, -1, 1)[FLOAT]], 
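The Mean [GlobalAveragePool] just registered opens the exported LayerNormalization of block 11: the sub, Square, Mean_1, add, clip_by_value (Minimum/Max), Sqrt, truediv, mul and add_1 entries that follow compute the mean, the variance, a clipped denominator, and the affine scale/shift. Roughly, in NumPy (eps and the clip bounds are the scalar Const tensors shown in the log; their values are not printed):

    import numpy as np

    def layer_norm_as_exported(x, gamma, beta, eps, clip_min, clip_max):
        """x is (batch, seq, 768); gamma/beta are the (768,) ReadVariableOp weights."""
        mean = x.mean(axis=-1, keepdims=True)                     # Mean   [GlobalAveragePool]
        centered = x - mean                                       # sub
        var = (centered * centered).mean(axis=-1, keepdims=True)  # Square + Mean_1
        denom = np.sqrt(np.clip(var + eps, clip_min, clip_max))   # add + clip_by_value + Sqrt
        return centered / denom * gamma + beta                    # truediv + mul + add_1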
[04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub [Sub] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/add:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub [Sub] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Square [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Square [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Square for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Square [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Square:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Square [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean_1 [GlobalAveragePool] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Square:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean_1 [GlobalAveragePool] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Square:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean_1:0 [04/08/2022-14:45:41] 
[V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean_1 [GlobalAveragePool] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Mean_1:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add/y:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value/Minimum [Min] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value/Minimum [Min] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add:0 -> (-1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Const_1:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value/Minimum for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value/Minimum [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value/Minimum:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value/Minimum [Min] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value/Minimum:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value [Max] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value/Minimum:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value [Max] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value/Minimum:0 -> (-1, 
-1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value [Max] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Sqrt [Sqrt] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Sqrt [Sqrt] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/clip_by_value:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Sqrt for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Sqrt [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Sqrt:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Sqrt [Sqrt] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/truediv [Div] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Sqrt:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/truediv [Div] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/sub:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/Sqrt:0 -> (-1, -1, 1)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/truediv for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/truediv [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/truediv:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/truediv [Div] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/truediv:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/mul [Mul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/truediv:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/mul [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/truediv:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/mul/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/mul for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/mul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/mul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/mul [Mul] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/mul:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1 [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/mul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/mul:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm/add_1/ReadVariableOp:0 -> (768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape [Shape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1:0 -> (-1, -1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape:0 -> (3)[INT32]], 
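Parsing now enters the EfficientGlobalPointer head: dense_6 repeats the same Tensordot pattern with a (768, 128) kernel, and the strided_slice / strided_slice_1 entries further below split those 128 channels into two 64-dim streams (query and key). The begin/stride constants are not printed, but these slices share one 2-element strides tensor, which is consistent with an even/odd channel split; the sketch below assumes that split and the usual query/key assignment, neither of which is visible in the log:

    import numpy as np

    def split_query_key(qk):
        """qk: dense_6/BiasAdd output, (batch, seq, 128) per the log.
        The even/odd split and the query/key assignment are assumptions;
        the log only shows the 128 -> 64 + 64 shapes."""
        qw = qk[..., 0::2]   # strided_slice    -> (batch, seq, 64)
        kw = qk[..., 1::2]   # strided_slice_1  -> (batch, seq, 64)
        return qw, kw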
[04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape__3328 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape__3328 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape__3328 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape__3328 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape__3328:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape__3328:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape__3328 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape__3328:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape__3328:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/GatherV2 [Gather] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Shape__3328:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/GatherV2 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/concat_1 [Concat] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/GatherV2:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Const_2:0 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Const_2:0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/concat_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot__3335 [Cast] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/concat_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot__3335 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot__3335 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot__3335 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot__3335:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot__3335:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot__3335 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot__3335:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Norm_11/add_1:0 -> (-1, -1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/Reshape_shape__4173 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/MatMul [MatMul] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape:0 -> (-1, 768)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape_1:0 -> (768, 128)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape_1:0 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/Reshape_1:0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/MatMul [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/MatMul:0 -> (-1, 128)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot [Reshape] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/MatMul:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot__3335:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot/MatMul:0 -> (-1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot__3335:0 -> (3)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd [Add] [04/08/2022-14:45:41] [V] [TRT] Searching for 
input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/Tensordot:0 -> (-1, -1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd/ReadVariableOp:0 -> (128)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:41] [V] [TRT] Searching for input: end_masked__3353 [04/08/2022-14:45:41] [V] [TRT] Searching for input: slice_axes__3341 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 -> (-1, -1, 128)[FLOAT]], [begin_masked__3342 -> (2)[INT32]], [end_masked__3353 -> (2)[INT32]], [slice_axes__3341 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1 [04/08/2022-14:45:41] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:41] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_7 [Slice] [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 [04/08/2022-14:45:41] [V] [TRT] Searching for input: begin_masked__3396 [04/08/2022-14:45:41] [V] [TRT] 
Searching for input: end_masked__3353 [04/08/2022-14:45:41] [V] [TRT] Searching for input: slice_axes__3341 [04/08/2022-14:45:41] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 [04/08/2022-14:45:41] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_7 [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 -> (-1, -1, 64)[FLOAT]], [begin_masked__3396 -> (2)[INT32]], [end_masked__3353 -> (2)[INT32]], [slice_axes__3341 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 -> (2)[INT32]], [04/08/2022-14:45:41] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_7 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_7 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_7:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_7:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_7 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_7:0 -> (-1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3348 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_7:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3348 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_7:0 -> (-1, -1, 32)[FLOAT]], [const_fold_opt__4087 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (_, _, 32), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3348 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3348 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3348:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3348:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3348 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3348:0 -> (-1, -1, 32, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_6 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:42] [V] [TRT] Searching for input: end_masked__3353 [04/08/2022-14:45:42] [V] [TRT] Searching for input: slice_axes__3341 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_6 [Slice] inputs: 
[StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 -> (-1, -1, 64)[FLOAT]], [begin_masked__3342 -> (2)[INT32]], [end_masked__3353 -> (2)[INT32]], [slice_axes__3341 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 -> (2)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_6 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_6 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_6:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_6:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_6 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_6:0 -> (-1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg_1 [Neg] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_6:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg_1 [Neg] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_6:0 -> (-1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg_1 [Neg] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg_1:0 -> (-1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3346 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg_1:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3346 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg_1:0 -> (-1, -1, 32)[FLOAT]], [const_fold_opt__4087 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (_, _, 32), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3346 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3346 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3346:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3346:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3346 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3346:0 -> (-1, -1, 32, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Concat__3349 [Concat] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3346:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3348:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Concat__3349 [Concat] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3346:0 -> (-1, -1, 32, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Unsqueeze__3348:0 -> (-1, -1, 32, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Concat__3349 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Concat__3349 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Concat__3349:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Concat__3349:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Concat__3349 [Concat] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Concat__3349:0 -> (-1, -1, 32, 2)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1 [Shape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1__3350 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1__3350 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1__3350 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1__3350 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1__3350:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1__3350:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1__3350 [Cast] 
outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1__3350:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1__3351 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1__3350:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1__3351 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape_1__3350:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1__3351 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1__3351 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1__3351:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1__3351:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1__3351 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1__3351:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1 [Reshape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Concat__3349:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1__3351:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_1_Concat__3349:0 -> (-1, -1, 32, 2)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1__3351:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3396 [04/08/2022-14:45:42] [V] [TRT] Searching for input: end_masked__3353 [04/08/2022-14:45:42] [V] [TRT] Searching for input: slice_axes__3341 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 -> (-1, -1, 128)[FLOAT]], [begin_masked__3396 -> (2)[INT32]], [end_masked__3353 -> (2)[INT32]], [slice_axes__3341 -> 
(2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 -> (2)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3396 [04/08/2022-14:45:42] [V] [TRT] Searching for input: end_masked__3353 [04/08/2022-14:45:42] [V] [TRT] Searching for input: slice_axes__3341 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5 [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 -> (-1, -1, 64)[FLOAT]], [begin_masked__3396 -> (2)[INT32]], [end_masked__3353 -> (2)[INT32]], [slice_axes__3341 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 -> (2)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5:0 -> (-1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3364 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3364 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5:0 -> (-1, -1, 32)[FLOAT]], [const_fold_opt__4087 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (_, _, 32), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3364 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3364 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3364:0 for ONNX 
tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3364:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3364 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3364:0 -> (-1, -1, 32, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_4 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:42] [V] [TRT] Searching for input: end_masked__3353 [04/08/2022-14:45:42] [V] [TRT] Searching for input: slice_axes__3341 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_4 [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 -> (-1, -1, 64)[FLOAT]], [begin_masked__3342 -> (2)[INT32]], [end_masked__3353 -> (2)[INT32]], [slice_axes__3341 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 -> (2)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_4 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_4 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_4:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_4:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_4 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_4:0 -> (-1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg [Neg] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_4:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg [Neg] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_4:0 -> (-1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg [Neg] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg:0 -> (-1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3362 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3362 [Unsqueeze] inputs: 
[StatefulPartitionedCall/model_2/efficient_global_pointer_1/Neg:0 -> (-1, -1, 32)[FLOAT]], [const_fold_opt__4087 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (_, _, 32), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3362 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3362 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3362:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3362:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3362 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3362:0 -> (-1, -1, 32, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Concat__3365 [Concat] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3362:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3364:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Concat__3365 [Concat] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3362:0 -> (-1, -1, 32, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Unsqueeze__3364:0 -> (-1, -1, 32, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Concat__3365 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Concat__3365 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Concat__3365:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Concat__3365:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Concat__3365 [Concat] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Concat__3365:0 -> (-1, -1, 32, 2)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape [Shape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape__3366 [Cast] 
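The strided_slice_4..7, Neg, Unsqueeze, Concat and Reshape entries in this stretch build, for each 64-dim stream, its interleaved sign-flipped partner, i.e. the rotation term of the rotary (sinusoidal) position embedding; the sinusoidal_position_embedding entries further down derive the position indices from the sequence length (Shape → strided_slice → Squeeze → Range). Which stride-2 slice picks the even channels and which the odd is not visible in the log, so the sketch follows the usual rotary convention:

    import numpy as np

    def rotate_interleaved(x):
        """One stack_*/Reshape_* branch from the log: x is (batch, seq, 64)."""
        x_even = x[..., 0::2]                          # stride-2 slice -> (batch, seq, 32)
        x_odd = x[..., 1::2]                           # stride-2 slice -> (batch, seq, 32)
        paired = np.stack([-x_odd, x_even], axis=-1)   # Neg + Unsqueeze + Concat -> (batch, seq, 32, 2)
        return paired.reshape(x.shape)                 # Reshape back to (batch, seq, 64)

In the log the output shape of that final Reshape is recovered dynamically through Shape → Cast → Reshape, since the batch and sequence dimensions are -1.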
[04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape__3366 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape__3366 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape__3366 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape__3366:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape__3366:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape__3366 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape__3366:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape__3367 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape__3366:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape__3367 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Shape__3366:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape__3367 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape__3367 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape__3367:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape__3367:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape__3367 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape__3367:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape [Reshape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Concat__3365:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape__3367:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/stack_Concat__3365:0 -> (-1, -1, 32, 2)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape__3367:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Shape [Shape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 -> (-1, -1, 128)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Shape for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Shape [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Shape:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Shape:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372 [Squeeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372 [Squeeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1:0 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (1,), squeezing to: () [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372 [Squeeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372:0 -> ()[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange [Range] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange/start:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange/delta:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange [Range] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange/start:0 -> ()[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372:0 -> ()[INT32]], 
[StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange/delta:0 -> ()[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange [Range] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange:0 -> (-1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange:0 -> (-1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast:0 -> (-1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Unsqueeze__3375 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] Unsqueeze__3375 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast:0 -> (-1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (_,), unsqueezing to: (_, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: Unsqueeze__3375 for ONNX node: Unsqueeze__3375 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Unsqueeze__3375:0 for ONNX tensor: Unsqueeze__3375:0 [04/08/2022-14:45:42] [V] [TRT] Unsqueeze__3375 [Unsqueeze] outputs: [Unsqueeze__3375:0 -> (1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Unsqueeze__3375:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: zero_const__739 [04/08/2022-14:45:42] [V] [TRT] Searching for input: end_masked__3378 
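The nodes traced here and over the next several entries (Shape -> Slice -> Range -> Cast -> Unsqueeze -> Mul -> Sin/Cos -> stack -> Reshape) are tf2onnx's export of the layer named sinusoidal_position_embedding: it reads the sequence length from the input shape, builds position indices 0..L-1, multiplies them by a fixed bank of 32 frequencies (folded into the initializer feeding einsum/ExpandDims_2, whose values the log does not print), and takes sin and cos before interleaving the two into a (1, L, 64) embedding. A minimal numpy sketch, assuming the standard output_dim = 64, base = 10000 formulation rather than the model's exact constants:

import numpy as np

def sinusoidal_position_embedding(seq_len, output_dim=64, base=10000.0):
    # Range + Cast: position indices as float32
    pos = np.arange(seq_len, dtype=np.float32)
    # Assumed frequency bank; in the exported graph this is a folded constant of shape (1, 1, 32)
    freq = base ** (-np.arange(0, output_dim, 2, dtype=np.float32) / output_dim)
    # einsum/Mul: outer product of positions and frequencies -> (L, 32)
    angles = pos[:, None] * freq[None, :]
    # Sin, Cos, stack -> (L, 32, 2), then Reshape interleaves them -> (1, L, 64)
    emb = np.stack([np.sin(angles), np.cos(angles)], axis=-1)
    return emb.reshape(1, seq_len, output_dim)

print(sinusoidal_position_embedding(8).shape)  # (1, 8, 64)

Further down in the trace, the Split of a (1, -1, 32) half into 32 single-channel tensors followed by the doubled Concat back to (1, -1, 64) simply repeats each sin/cos channel twice, so the embedding can be broadcast-multiplied (mul_1, mul_3) against the (batch, seq, 64) projections.
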
[04/08/2022-14:45:42] [V] [TRT] Searching for input: zero_const__739 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2/stack_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2 [Slice] inputs: [Unsqueeze__3375:0 -> (1, -1)[FLOAT]], [zero_const__739 -> (1)[INT32]], [end_masked__3378 -> (1)[INT32]], [zero_const__739 -> (1)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2/stack_2:0 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2:0 -> (1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__4058 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2:0 -> (1, -1)[FLOAT]], [const_fold_opt__4058 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (1, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 -> (1, -1, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul [Mul] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul [Mul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims_2:0 -> (1, 1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims_2:0 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims_2:0 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0 -> (1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin [Sin] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin [Sin] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0 -> (1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin [Sin] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin:0 -> (1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384 [Unsqueeze] inputs: 
[StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin:0 -> (1, -1, 32)[FLOAT]], [const_fold_opt__4087 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (1, _, 32), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 -> (1, -1, 32, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos [Cos] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos [Cos] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0 -> (1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos [Cos] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 -> (1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 -> (1, -1, 32)[FLOAT]], [const_fold_opt__4087 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (1, _, 32), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386 for ONNX node: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386:0 -> (1, -1, 32, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387 [Concat] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387 [Concat] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 -> (1, -1, 32, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386:0 -> (1, -1, 32, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387 [Concat] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> (1, -1, 32, 2)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Unsqueeze__3391 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Unsqueeze__3391 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_1__3372:0 -> ()[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (), unsqueezing to: (1,) [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Unsqueeze__3391 for ONNX node: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Unsqueeze__3391 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Unsqueeze__3391:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Unsqueeze__3391:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Unsqueeze__3391 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Unsqueeze__3391:0 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Concat__3394 [Concat] [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__3765 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Unsqueeze__3391:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__3897 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Concat__3394 [Concat] inputs: [const_fold_opt__3765 -> (1)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Unsqueeze__3391:0 -> (1)[INT32]], [const_fold_opt__3897 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Concat__3394 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Concat__3394 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Concat__3394:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Concat__3394:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Concat__3394 [Concat] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Concat__3394:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape__3395 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Concat__3394:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape__3395 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape/shape_Concat__3394:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape__3395 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape__3395 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape__3395:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape__3395:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape__3395 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape__3395:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape [Reshape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape__3395:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> (1, -1, 32, 2)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape__3395:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0 -> (1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3396 [04/08/2022-14:45:42] [V] [TRT] Searching for input: end_masked__3353 [04/08/2022-14:45:42] [V] [TRT] Searching for input: slice_axes__3341 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0 -> (1, -1, 64)[FLOAT]], [begin_masked__3396 -> (2)[INT32]], [end_masked__3353 -> (2)[INT32]], [slice_axes__3341 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 -> (2)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 -> (1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [Split] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [Split] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 -> (1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_234 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_235 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_236 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_237 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_238 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_239 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_240 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_241 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_242 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_243 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_244 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_245 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] 
Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_246 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_247 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_248 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_249 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_250 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_251 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_252 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_253 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_254 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_255 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_256 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_257 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_258 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_259 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_260 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_261 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_262 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_263 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_264 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:0 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:1 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:2 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:2 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:3 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:3 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:4 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:4 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:5 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:5 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:6 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:6 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:7 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:7 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:8 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:8 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:9 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:9 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:10 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:10 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:11 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:11 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:12 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:12 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:13 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:13 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:14 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:14 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:15 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:15 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:16 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:16 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:17 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:17 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:18 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:18 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:19 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:19 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:20 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:20 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:21 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:21 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:22 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:22 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:23 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:23 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:24 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:24 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:25 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:25 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:26 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:26 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:27 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:27 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:28 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:28 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:29 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:29 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:30 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:30 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:31 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:31 [04/08/2022-14:45:42] [V] [TRT] 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 [Split] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:0 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:1 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:2 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:3 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:4 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:5 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:6 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:7 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:8 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:9 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:10 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:11 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:12 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:13 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:14 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:15 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:16 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:17 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:18 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:19 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:20 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:21 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:22 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:23 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:24 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:25 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:26 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:27 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:28 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:29 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:30 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:31 -> (1, -1, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1 [Concat] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:1 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:1 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:2 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:2 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:3 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:3 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:4 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:4 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:5 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:5 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:6 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:6 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:7 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:7 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:8 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:8 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:9 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:9 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:10 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:10 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:11 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:11 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:12 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:12 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:13 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:13 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:14 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:14 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:15 [04/08/2022-14:45:42] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:15 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:16 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:16 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:17 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:17 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:18 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:18 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:19 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:19 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:20 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:20 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:21 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:21 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:22 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:22 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:23 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:23 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:24 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:24 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:25 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:25 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:26 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:26 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:27 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:27 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:28 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:28 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:29 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:29 [04/08/2022-14:45:42] [V] [TRT] 
Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:30 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:30 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:31 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:31 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:0 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:0 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:1 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:1 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:2 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:2 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:3 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:3 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:4 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:4 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:5 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:5 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:6 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:6 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:7 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:7 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:8 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:8 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:9 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:9 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:10 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:10 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:11 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:11 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:12 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:12 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:13 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:13 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:14 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:14 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:15 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:15 -> (1, -1, 
1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:16 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:16 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:17 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:17 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:18 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:18 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:19 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:19 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:20 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:20 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:21 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:21 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:22 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:22 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:23 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:23 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:24 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:24 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:25 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:25 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:26 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:26 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:27 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:27 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:28 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:28 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:29 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:29 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:30 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:30 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:31 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:31 -> (1, -1, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1 [Concat] outputs: 
[StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 -> (1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_3 [Mul] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_3 [Mul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape_1:0 -> (-1, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 -> (1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_3 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_3 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_3:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_3 [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_3:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_1 [Mul] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_1 [Mul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Reshape:0 -> (-1, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 -> (1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_1 [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_1:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:42] [V] [TRT] Searching for input: end_masked__3353 [04/08/2022-14:45:42] [V] [TRT] Searching for input: slice_axes__3341 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2 [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0 -> (1, -1, 64)[FLOAT]], [begin_masked__3342 -> (2)[INT32]], 
[end_masked__3353 -> (2)[INT32]], [slice_axes__3341 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_5/stack_2:0 -> (2)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 -> (1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [Split] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [Split] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 -> (1, -1, 32)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_277 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_278 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_279 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_280 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_281 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_282 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_283 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_284 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_285 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_286 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_287 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_288 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_289 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_290 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_291 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_292 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_293 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_294 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_295 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_296 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_297 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_298 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_299 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_300 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_301 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_302 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_303 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_304 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split 
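The long run of Split/Concat registrations above and below is expected rather than an exporter error: it appears to be how tf2onnx lowered the repeat-interleave step of what looks like the rotary (sinusoidal) position embedding inside efficient_global_pointer_1. The 32-wide slice of sinusoidal_position_embedding/Reshape is split into 32 single-channel outputs, and the following Concat consumes every output twice, yielding a (1, -1, 64) tensor (concat and concat_1 in this log). The surrounding mul/mul_1/mul_2/mul_3 and add/add_1 nodes then combine these tables with the query- and key-side projections, and further down the same section the tf.einsum is decomposed into Transpose + Reshape + MatMul + Reshape with a Shape/Slice/Concat chain computing the output shape, while the division appears as a Mul by a constant-folded reciprocal (truediv_recip). A minimal NumPy sketch of what this subgraph most likely computes; the function names and the assignment of sin vs. cos to concat vs. concat_1 are assumptions for illustration, not taken from the log:

    import numpy as np

    def repeat_interleave_last(x):
        # ONNX lowering seen in the log: Split into 32 slices along the last
        # axis, then Concat with every slice appearing twice -> width 64.
        parts = np.split(x, x.shape[-1], axis=-1)
        return np.concatenate([p for part in parts for p in (part, part)], axis=-1)

    def rope(x, pos):
        # x:   (batch, seq, 64)  query- or key-side projection
        # pos: (1, seq, 64)      sinusoidal position table (sin/cos interleaved)
        cos_pos = repeat_interleave_last(pos[..., 1::2])   # assumed: concat in the log
        sin_pos = repeat_interleave_last(pos[..., ::2])    # assumed: concat_1 in the log
        # "rotate half": pair-swap with sign flip, flattened back to width 64
        x2 = np.stack([-x[..., 1::2], x[..., ::2]], axis=-1).reshape(x.shape)
        return x * cos_pos + x2 * sin_pos                  # mul_*/add/add_1 in the log

This is only a reading aid for the graph structure; TensorRT parses the exported nodes as-is, so the many Split outputs and the duplicated Concat inputs recorded in this part of the log are the normal result of that lowering.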
[04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_305 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_306 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_307 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:0 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:1 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:2 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:2 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:3 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:3 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:4 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:4 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:5 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:5 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:6 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:6 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:7 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:7 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:8 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:8 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:9 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:9 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:10 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:10 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:11 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:11 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:12 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:12 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:13 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:13 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:14 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:14 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:15 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:15 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:16 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:16 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:17 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:17 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:18 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:18 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:19 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:19 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:20 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:20 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:21 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:21 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:22 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:22 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:23 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:23 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:24 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:24 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:25 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:25 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:26 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:26 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:27 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:27 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:28 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:28 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:29 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:29 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:30 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:30 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:31 
for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:31 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/split [Split] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:0 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:1 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:2 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:3 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:4 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:5 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:6 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:7 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:8 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:9 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:10 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:11 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:12 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:13 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:14 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:15 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:16 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:17 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:18 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:19 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:20 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:21 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:22 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:23 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:24 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:25 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:26 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:27 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:28 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:29 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:30 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:31 -> (1, -1, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat [Concat] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:1 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:1 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:2 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:2 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:3 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:3 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:4 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:4 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:5 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:5 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:6 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:6 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:7 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:7 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:8 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:8 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:9 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:9 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:10 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:10 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:11 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:11 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:12 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:12 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:13 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:13 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:14 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:14 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:15 [04/08/2022-14:45:42] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:15 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:16 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:16 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:17 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:17 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:18 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:18 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:19 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:19 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:20 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:20 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:21 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:21 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:22 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:22 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:23 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:23 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:24 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:24 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:25 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:25 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:26 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:26 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:27 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:27 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:28 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:28 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:29 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:29 [04/08/2022-14:45:42] [V] [TRT] Searching for input: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:30 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:30 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:31 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:31 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat [Concat] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:0 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:0 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:1 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:1 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:2 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:2 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:3 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:3 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:4 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:4 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:5 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:5 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:6 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:6 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:7 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:7 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:8 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:8 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:9 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:9 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:10 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:10 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:11 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:11 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:12 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:12 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:13 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:13 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:14 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:14 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:15 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:15 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:16 -> (1, -1, 
1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:16 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:17 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:17 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:18 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:18 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:19 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:19 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:20 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:20 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:21 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:21 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:22 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:22 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:23 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:23 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:24 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:24 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:25 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:25 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:26 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:26 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:27 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:27 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:28 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:28 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:29 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:29 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:30 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:30 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:31 -> (1, -1, 1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:31 -> (1, -1, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat [Concat] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat:0 -> (1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_2 [Mul] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_2 [Mul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_1:0 -> (-1, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat:0 -> (1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_2 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_2 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_2 [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_2:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_1 [Add] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_2:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_3:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_1 [Add] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_2:0 -> (-1, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_3:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_1 [Add] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_1:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose [Transpose] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_1:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose [Transpose] outputs: 
[StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose:0 -> (-1, 64, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1 [Shape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1 [Shape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose:0 -> (-1, 64, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1 [Shape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1__3402 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1__3402 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1__3402 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1__3402 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1__3402:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1__3402:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1__3402 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1__3402:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_3 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1__3402:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_3 [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape_1__3402:0 -> (3)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_fold_opt__4087 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_3 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_3 [04/08/2022-14:45:42] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_3:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_3 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1 [Reshape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1_shape__4164 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose:0 -> (-1, 64, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1_shape__4164 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1:0 -> (-1, 64, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul [Mul] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul [Mul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice:0 -> (-1, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat:0 -> (1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add [Add] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_1:0 [04/08/2022-14:45:42] [V] [TRT] 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/add [Add] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul:0 -> (-1, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_1:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/add [Add] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape [Shape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_1 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:42] [V] [TRT] Searching for input: 
const_starts__2572 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_1 [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421:0 -> (3)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_1 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_1:0 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice [Slice] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Shape__3421:0 -> (3)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice:0 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2/shape_Concat__3438 [Concat] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_1:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_3:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2/shape_Concat__3438 [Concat] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice:0 -> (1)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_1:0 -> (1)[INT32]], 
[StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/strided_slice_3:0 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2/shape_Concat__3438 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2/shape_Concat__3438 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2/shape_Concat__3438:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2/shape_Concat__3438:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2/shape_Concat__3438 [Concat] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2/shape_Concat__3438:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2__3447 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2/shape_Concat__3438:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2__3447 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2/shape_Concat__3438:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2__3447 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2__3447 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2__3447:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2__3447:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2__3447 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2__3447:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape [Reshape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_shape__4161 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add:0 -> (-1, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_shape__4161 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape:0 -> (-1, -1, 64)[FLOAT]], [04/08/2022-14:45:42] [V] 
[TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/MatMul [MatMul] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape:0 -> (-1, -1, 64)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1:0 -> (-1, 64, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/MatMul for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/MatMul [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/MatMul:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/MatMul [MatMul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/MatMul:0 -> (-1, -1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2 [Reshape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/MatMul:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2__3447:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/MatMul:0 -> (-1, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2__3447:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2:0 -> (-1, -1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv [Mul] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv [Mul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2:0 -> (-1, -1, -1)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/truediv_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv:0 -> (-1, -1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Unsqueeze__3449 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:42] [V] [TRT] Unsqueeze__3449 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv:0 -> (-1, -1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (_, _, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: Unsqueeze__3449 for ONNX node: Unsqueeze__3449 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Unsqueeze__3449:0 for ONNX tensor: Unsqueeze__3449:0 [04/08/2022-14:45:42] [V] [TRT] Unsqueeze__3449 [Unsqueeze] outputs: [Unsqueeze__3449:0 -> (-1, 1, -1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Unsqueeze__3449:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3396 [04/08/2022-14:45:42] [V] [TRT] Searching for input: end_masked__3353 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8/stack_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8 [Slice] inputs: [Unsqueeze__3449:0 -> (-1, 1, -1, -1)[FLOAT]], [begin_masked__3396 -> (2)[INT32]], [end_masked__3353 -> (2)[INT32]], [begin_masked__3342 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8/stack_2:0 -> (2)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8:0 -> (-1, 1, -1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/GatherV2 [Gather] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3342 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/GatherV2 [Gather] inputs: 
[StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Shape__3368:0 -> (3)[INT32]], [begin_masked__3342 -> (2)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Using Gather axis: 0 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/GatherV2 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/GatherV2 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/GatherV2:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/GatherV2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/GatherV2 [Gather] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/GatherV2:0 -> (2)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/concat_1 [Concat] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/GatherV2:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Const_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/concat_1 [Concat] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/GatherV2:0 -> (2)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Const_2:0 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Const_2:0 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Const_2:0 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/concat_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/concat_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/concat_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/concat_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/concat_1 [Concat] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot__3463 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/concat_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot__3463 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/concat_1:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot__3463 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot__3463 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot__3463:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot__3463:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot__3463 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot__3463:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape [Reshape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape_shape__4182 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape [Reshape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd:0 -> (-1, -1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/Embedding-Mapping/Tensordot/Reshape_shape__4182 -> (2)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape [Reshape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape:0 -> (-1, 128)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/MatMul [MatMul] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/MatMul [MatMul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape:0 -> (-1, 128)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape_1:0 -> (128, 20)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape_1:0 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/Reshape_1:0 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/MatMul for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/MatMul [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/MatMul:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/MatMul:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/MatMul [MatMul] outputs: 
[StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/MatMul:0 -> (-1, 20)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot [Reshape] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/MatMul:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot__3463:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot [Reshape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot/MatMul:0 -> (-1, 20)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot__3463:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot [Reshape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot:0 -> (-1, -1, 20)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd [Add] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd [Add] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/Tensordot:0 -> (-1, -1, 20)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd/ReadVariableOp:0 -> (20)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd/ReadVariableOp:0 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd/ReadVariableOp:0 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd [Add] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd:0 -> (-1, -1, 20)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1 [Mul] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: 
ConstantFolding/StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1_recip:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1 [Mul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd:0 -> (-1, -1, 20)[FLOAT]], [ConstantFolding/StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1_recip:0 -> ()[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: ConstantFolding/StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1_recip:0 for ONNX node: ConstantFolding/StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1_recip:0 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1 [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1:0 -> (-1, -1, 20)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose [Transpose] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose [Transpose] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1:0 -> (-1, -1, 20)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose [Transpose] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose:0 -> (-1, 20, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Unsqueeze__3472 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:42] [V] [TRT] Unsqueeze__3472 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose:0 -> (-1, 20, -1)[FLOAT]], [const_fold_opt__4087 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (_, 20, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: Unsqueeze__3472 for ONNX node: Unsqueeze__3472 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Unsqueeze__3472:0 for ONNX tensor: Unsqueeze__3472:0 [04/08/2022-14:45:42] [V] [TRT] Unsqueeze__3472 [Unsqueeze] outputs: [Unsqueeze__3472:0 -> (-1, 20, -1, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for 
input: Unsqueeze__3472:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3475 [04/08/2022-14:45:42] [V] [TRT] Searching for input: end_masked__3476 [04/08/2022-14:45:42] [V] [TRT] Searching for input: slice_axes__3477 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10/stack_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10 [Slice] inputs: [Unsqueeze__3472:0 -> (-1, 20, -1, 1)[FLOAT]], [begin_masked__3475 -> (4)[INT32]], [end_masked__3476 -> (4)[INT32]], [slice_axes__3477 -> (4)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10/stack_2:0 -> (4)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10:0 -> (-1, 10, -1, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Unsqueeze__3465 [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:42] [V] [TRT] Unsqueeze__3465 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum_1/transpose:0 -> (-1, 20, -1)[FLOAT]], [const_starts__2572 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (_, 20, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: Unsqueeze__3465 for ONNX node: Unsqueeze__3465 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Unsqueeze__3465:0 for ONNX tensor: Unsqueeze__3465:0 [04/08/2022-14:45:42] [V] [TRT] Unsqueeze__3465 [Unsqueeze] outputs: [Unsqueeze__3465:0 -> (-1, 20, 1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9 [Slice] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Unsqueeze__3465:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: begin_masked__3468 [04/08/2022-14:45:42] [V] [TRT] Searching for input: end_masked__3469 [04/08/2022-14:45:42] [V] [TRT] Searching for input: slice_axes__3470 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9/stack_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9 [Slice] inputs: [Unsqueeze__3465:0 -> (-1, 20, 1, -1)[FLOAT]], [begin_masked__3468 -> (3)[INT32]], [end_masked__3469 -> (3)[INT32]], [slice_axes__3470 -> (3)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9/stack_2:0 -> (3)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9 [Slice] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9:0 -> (-1, 10, 1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_2 [Add] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_2 [Add] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_8:0 -> (-1, 1, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_9:0 -> (-1, 10, 1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_2 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_2 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_2:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_2 [Add] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_2:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_3 [Add] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_2:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_3 [Add] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_2:0 -> (-1, 10, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_10:0 -> (-1, 10, -1, 1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_3 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_3 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_3:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_3 [Add] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_3:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Not__652:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims [Unsqueeze] inputs: [Not__652:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (_, _), unsqueezing to: 
(_, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/concat [Concat] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/concat [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/concat:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Not__667 [Not] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/concat:0 [04/08/2022-14:45:42] [V] [TRT] Not__667 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: Not__667 for ONNX node: Not__667 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Not__667:0 for ONNX tensor: Not__667:0 [04/08/2022-14:45:42] [V] [TRT] Not__667 [Not] outputs: [Not__667:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Cast__670 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Not__667:0 [04/08/2022-14:45:42] [V] [TRT] Cast__670 [Cast] inputs: [Not__667:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: Cast__670 for ONNX node: Cast__670 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Cast__670:0 for ONNX tensor: Cast__670:0 [04/08/2022-14:45:42] [V] [TRT] Cast__670 [Cast] outputs: [Cast__670:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/All_ReduceSum__676 [ReduceSum] [04/08/2022-14:45:42] 
[V] [TRT] Searching for input: Cast__670:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/All_ReduceSum__676 [ReduceSum] inputs: [Cast__670:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/All_ReduceSum__676 for ONNX node: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/All_ReduceSum__676 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/All_ReduceSum__676:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/All_ReduceSum__676:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/All_ReduceSum__676 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/All_ReduceSum__676:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Greater__680 [Greater] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/All_ReduceSum__676:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:42] [V] [TRT] Greater__680 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Add_11/All_ReduceSum__676:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: Greater__680 for ONNX node: Greater__680 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Greater__680:0 for ONNX tensor: Greater__680:0 [04/08/2022-14:45:42] [V] [TRT] Greater__680 [Greater] outputs: [Greater__680:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Not__683 [Not] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Greater__680:0 [04/08/2022-14:45:42] [V] [TRT] Not__683 [Not] inputs: [Greater__680:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: Not__683 for ONNX node: Not__683 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Not__683:0 for ONNX tensor: Not__683:0 [04/08/2022-14:45:42] [V] [TRT] Not__683 [Not] outputs: [Not__683:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Not__683:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims [Unsqueeze] inputs: [Not__683:0 -> (-1, -1)[BOOL]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims:0 [04/08/2022-14:45:42] [V] [TRT] 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/concat [Concat] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/concat [Concat] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims:0 -> (1, -1, -1)[BOOL]], [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/ExpandDims:0 -> (1, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/concat for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/concat [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/concat:0 for ONNX tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/concat:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/concat [Concat] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Not__692 [Not] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/concat:0 [04/08/2022-14:45:42] [V] [TRT] Not__692 [Not] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/concat:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: Not__692 for ONNX node: Not__692 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Not__692:0 for ONNX tensor: Not__692:0 [04/08/2022-14:45:42] [V] [TRT] Not__692 [Not] outputs: [Not__692:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Cast__695 [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Not__692:0 [04/08/2022-14:45:42] [V] [TRT] Cast__695 [Cast] inputs: [Not__692:0 -> (2, -1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: Cast__695 for ONNX node: Cast__695 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Cast__695:0 for ONNX tensor: Cast__695:0 [04/08/2022-14:45:42] [V] [TRT] Cast__695 [Cast] outputs: [Cast__695:0 -> (2, -1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/All_ReduceSum__701 [ReduceSum] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Cast__695:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/All_ReduceSum__701 [ReduceSum] inputs: [Cast__695:0 -> (2, -1, -1)[FLOAT]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/All_ReduceSum__701 for ONNX node: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/All_ReduceSum__701 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/All_ReduceSum__701:0 for ONNX tensor: 
StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/All_ReduceSum__701:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/All_ReduceSum__701 [ReduceSum] outputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/All_ReduceSum__701:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Greater__705 [Greater] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/All_ReduceSum__701:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 [04/08/2022-14:45:42] [V] [TRT] Greater__705 [Greater] inputs: [StatefulPartitionedCall/model_2/Transformer-FeedForward-Add_11/All_ReduceSum__701:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention-Norm/Const:0 -> ()[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: Greater__705 for ONNX node: Greater__705 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Greater__705:0 for ONNX tensor: Greater__705:0 [04/08/2022-14:45:42] [V] [TRT] Greater__705 [Greater] outputs: [Greater__705:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: Not__708 [Not] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Greater__705:0 [04/08/2022-14:45:42] [V] [TRT] Not__708 [Not] inputs: [Greater__705:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Registering layer: Not__708 for ONNX node: Not__708 [04/08/2022-14:45:42] [V] [TRT] Registering tensor: Not__708:0 for ONNX tensor: Not__708:0 [04/08/2022-14:45:42] [V] [TRT] Not__708 [Not] outputs: [Not__708:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Cast [Cast] [04/08/2022-14:45:42] [V] [TRT] Searching for input: Not__708:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Cast [Cast] inputs: [Not__708:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:42] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:42] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Cast for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Cast [04/08/2022-14:45:42] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Cast:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Cast:0 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/Cast [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Cast:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:42] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims [Unsqueeze] [04/08/2022-14:45:42] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/Cast:0 [04/08/2022-14:45:42] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:42] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/Cast:0 -> (-1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Original shape: (_, _), unsqueezing to: (_, _, _) [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims [04/08/2022-14:45:43] 
[V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3 [Unsqueeze] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_starts__807 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_1 [Sub] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_1 [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_1 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_1:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_1 [Sub] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_7 [Mul] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_1:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_7 [Mul] inputs: 
[StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_1:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_7 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_7 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_7:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_7:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_7 [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_7:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1 [Unsqueeze] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_fold_opt__4087 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1 [Unsqueeze] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims:0 -> (-1, 1, -1)[FLOAT]], [const_fold_opt__4087 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Original shape: (_, 1, _), unsqueezing to: (_, _, _, _) [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1 [Unsqueeze] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1:0 -> (-1, 1, -1, 1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub [Sub] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1:0 -> (-1, 1, -1, 1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub [Sub] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub:0 -> (-1, 1, -1, 1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_5 [Mul] [04/08/2022-14:45:43] [V] [TRT] 
Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_5 [Mul] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Cast_1/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub:0 -> (-1, 1, -1, 1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_5 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_5 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_5:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_5:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_5 [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_5:0 -> (-1, 1, -1, 1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_4 [Mul] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_3:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_4 [Mul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_3:0 -> (-1, 10, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_1:0 -> (-1, 1, -1, 1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_4 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_4 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_4:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_4:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_4 [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_4:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_4 [Add] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_4:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_5:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_4 [Add] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_4:0 -> (-1, 10, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_5:0 -> (-1, 1, -1, 1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_4 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_4 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_4:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_4:0 [04/08/2022-14:45:43] [V] [TRT] 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_4 [Add] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_4:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_6 [Mul] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_4:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_6 [Mul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_4:0 -> (-1, 10, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ExpandDims_3:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_6 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_6 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_6:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_6:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_6 [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_6:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5 [Add] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_6:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_7:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5 [Add] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_6:0 -> (-1, 10, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_7:0 -> (-1, 1, 1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5 [Add] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape [Shape] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape [Shape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape:0 for ONNX 
tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape [Shape] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape__3478 [Cast] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape__3478 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape:0 -> (4)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape__3478 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape__3478 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape__3478:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape__3478:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape__3478 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape__3478:0 -> (4)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like__3479 [Cast] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape__3478:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like__3479 [Cast] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like/Shape__3478:0 -> (4)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Casting to type: int32 [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like__3479 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like__3479 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like__3479:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like__3479:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like__3479 [Cast] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like__3479:0 -> (4)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like [Expand] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like__3479:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like [Expand] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like__3479:0 -> (4)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like for ONNX 
node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like [Expand] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Shape__3480 [Shape] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like:0 [04/08/2022-14:45:43] [V] [TRT] Shape__3480 [Shape] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Shape__3480 for ONNX node: Shape__3480 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Shape__3480:0 for ONNX tensor: Shape__3480:0 [04/08/2022-14:45:43] [V] [TRT] Shape__3480 [Shape] outputs: [Shape__3480:0 -> (4)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Slice__3487 [Slice] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Shape__3480:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_starts__3484 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_ends__3485 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:43] [V] [TRT] Slice__3487 [Slice] inputs: [Shape__3480:0 -> (4)[INT32]], [const_starts__3484 -> (1)[INT32]], [const_ends__3485 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Slice__3487 for ONNX node: Slice__3487 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Slice__3487:0 for ONNX tensor: Slice__3487:0 [04/08/2022-14:45:43] [V] [TRT] Slice__3487 [Slice] outputs: [Slice__3487:0 -> (2)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Slice__3497 [Slice] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Slice__3487:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_starts__2572 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:43] [V] [TRT] Slice__3497 [Slice] inputs: [Slice__3487:0 -> (2)[INT32]], [const_starts__807 -> (1)[INT32]], [const_starts__2572 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Slice__3497 for ONNX node: Slice__3497 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Slice__3497:0 for ONNX tensor: Slice__3497:0 [04/08/2022-14:45:43] [V] [TRT] Slice__3497 [Slice] outputs: [Slice__3497:0 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Slice__3493 [Slice] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Slice__3487:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_starts__807 [04/08/2022-14:45:43] [V] [TRT] Searching for input: const_axes__1985 [04/08/2022-14:45:43] [V] [TRT] Slice__3493 [Slice] inputs: [Slice__3487:0 -> (2)[INT32]], [const_axes__1985 -> (1)[INT32]], [const_starts__807 -> (1)[INT32]], [const_axes__1985 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Slice__3493 for ONNX node: Slice__3493 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Slice__3493:0 for ONNX tensor: Slice__3493:0 [04/08/2022-14:45:43] [V] [TRT] Slice__3493 [Slice] 
outputs: [Slice__3493:0 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Mul__3498 [Mul] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Slice__3493:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: Slice__3497:0 [04/08/2022-14:45:43] [V] [TRT] Mul__3498 [Mul] inputs: [Slice__3493:0 -> (1)[INT32]], [Slice__3497:0 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Mul__3498 for ONNX node: Mul__3498 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Mul__3498:0 for ONNX tensor: Mul__3498:0 [04/08/2022-14:45:43] [V] [TRT] Mul__3498 [Mul] outputs: [Mul__3498:0 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Expand__3499 [Expand] [04/08/2022-14:45:43] [V] [TRT] Searching for input: one__3489 [04/08/2022-14:45:43] [V] [TRT] Searching for input: Mul__3498:0 [04/08/2022-14:45:43] [V] [TRT] Expand__3499 [Expand] inputs: [one__3489 -> ()[INT32]], [Mul__3498:0 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: one__3489 for ONNX node: one__3489 [04/08/2022-14:45:43] [V] [TRT] Registering layer: Expand__3499 for ONNX node: Expand__3499 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Expand__3499:0 for ONNX tensor: Expand__3499:0 [04/08/2022-14:45:43] [V] [TRT] Expand__3499 [Expand] outputs: [Expand__3499:0 -> (-1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: CumSum__3500 [CumSum] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Expand__3499:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart/num_lower:0 [04/08/2022-14:45:43] [V] [TRT] CumSum__3500 [CumSum] inputs: [Expand__3499:0 -> (-1)[INT32]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart/num_lower:0 -> ()[INT32]], [04/08/2022-14:45:43] [V] [TRT] Original shape: (1,), squeezing to: () [04/08/2022-14:45:43] [V] [TRT] Registering tensor: CumSum__3500:0 for ONNX tensor: CumSum__3500:0 [04/08/2022-14:45:43] [V] [TRT] CumSum__3500 [CumSum] outputs: [CumSum__3500:0 -> (-1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Reshape__3501 [Reshape] [04/08/2022-14:45:43] [V] [TRT] Searching for input: CumSum__3500:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: Slice__3487:0 [04/08/2022-14:45:43] [V] [TRT] Reshape__3501 [Reshape] inputs: [CumSum__3500:0 -> (-1)[INT32]], [Slice__3487:0 -> (2)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Reshape__3501 for ONNX node: Reshape__3501 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Reshape__3501:0 for ONNX tensor: Reshape__3501:0 [04/08/2022-14:45:43] [V] [TRT] Reshape__3501 [Reshape] outputs: [Reshape__3501:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Div__3502 [Div] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Reshape__3501:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: Slice__3497:0 [04/08/2022-14:45:43] [V] [TRT] Div__3502 [Div] inputs: [Reshape__3501:0 -> (-1, -1)[INT32]], [Slice__3497:0 -> (1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Div__3502 for ONNX node: Div__3502 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Div__3502:0 for ONNX tensor: Div__3502:0 [04/08/2022-14:45:43] [V] [TRT] Div__3502 [Div] outputs: [Div__3502:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: pre_Add [Mul] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Slice__3497:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: Div__3502:0 [04/08/2022-14:45:43] [V] [TRT] pre_Add [Mul] inputs: [Slice__3497:0 -> (1)[INT32]], 
[Div__3502:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: pre_Add for ONNX node: pre_Add [04/08/2022-14:45:43] [V] [TRT] Registering tensor: pre_add for ONNX tensor: pre_add [04/08/2022-14:45:43] [V] [TRT] pre_Add [Mul] outputs: [pre_add -> (-1, -1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Mod__3503 [Sub] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Reshape__3501:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: pre_add [04/08/2022-14:45:43] [V] [TRT] Mod__3503 [Sub] inputs: [Reshape__3501:0 -> (-1, -1)[INT32]], [pre_add -> (-1, -1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Mod__3503 for ONNX node: Mod__3503 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Mod__3503:0 for ONNX tensor: Mod__3503:0 [04/08/2022-14:45:43] [V] [TRT] Mod__3503 [Sub] outputs: [Mod__3503:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Sub__3504 [Sub] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Mod__3503:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: Div__3502:0 [04/08/2022-14:45:43] [V] [TRT] Sub__3504 [Sub] inputs: [Mod__3503:0 -> (-1, -1)[INT32]], [Div__3502:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Sub__3504 for ONNX node: Sub__3504 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Sub__3504:0 for ONNX tensor: Sub__3504:0 [04/08/2022-14:45:43] [V] [TRT] Sub__3504 [Sub] outputs: [Sub__3504:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Greater__3506 [Greater] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Neg__3505:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: Sub__3504:0 [04/08/2022-14:45:43] [V] [TRT] Greater__3506 [Greater] inputs: [Neg__3505:0 -> ()[INT32]], [Sub__3504:0 -> (-1, -1)[INT32]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Greater__3506 for ONNX node: Greater__3506 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Greater__3506:0 for ONNX tensor: Greater__3506:0 [04/08/2022-14:45:43] [V] [TRT] Greater__3506 [Greater] outputs: [Greater__3506:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Not__3507 [Not] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Greater__3506:0 [04/08/2022-14:45:43] [V] [TRT] Not__3507 [Not] inputs: [Greater__3506:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: Not__3507 for ONNX node: Not__3507 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Not__3507:0 for ONNX tensor: Not__3507:0 [04/08/2022-14:45:43] [V] [TRT] Not__3507 [Not] outputs: [Not__3507:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: Cast__3508 [Cast] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Not__3507:0 [04/08/2022-14:45:43] [V] [TRT] Cast__3508 [Cast] inputs: [Not__3507:0 -> (-1, -1)[BOOL]], [04/08/2022-14:45:43] [V] [TRT] Casting to type: float32 [04/08/2022-14:45:43] [V] [TRT] Registering layer: Cast__3508 for ONNX node: Cast__3508 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: Cast__3508:0 for ONNX tensor: Cast__3508:0 [04/08/2022-14:45:43] [V] [TRT] Cast__3508 [Cast] outputs: [Cast__3508:0 -> (-1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart [Mul] [04/08/2022-14:45:43] [V] [TRT] Searching for input: Cast__3508:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart 
[Mul] inputs: [Cast__3508:0 -> (-1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/ones_like:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_2 [Sub] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_2 [Sub] inputs: [StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/sub/x:0 -> ()[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/MatrixBandPart:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_2 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_2 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_2:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_2:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_2 [Sub] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_2:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8 [Mul] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_2:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8/y:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8 [Mul] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_2:0 -> (-1, 10, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8/y:0 -> ()[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8/y:0 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8/y:0 [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8:0 for ONNX tensor: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8 [Mul] outputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8:0 -> (-1, 10, 
-1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Parsing node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_3 [Sub] [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5:0 [04/08/2022-14:45:43] [V] [TRT] Searching for input: StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8:0 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_3 [Sub] inputs: [StatefulPartitionedCall/model_2/efficient_global_pointer_1/add_5:0 -> (-1, 10, -1, -1)[FLOAT]], [StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8:0 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Registering layer: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_3 for ONNX node: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_3 [04/08/2022-14:45:43] [V] [TRT] Registering tensor: efficient_global_pointer_1_362 for ONNX tensor: efficient_global_pointer_1 [04/08/2022-14:45:43] [V] [TRT] StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_3 [Sub] outputs: [efficient_global_pointer_1 -> (-1, 10, -1, -1)[FLOAT]], [04/08/2022-14:45:43] [V] [TRT] Marking efficient_global_pointer_1_362 as output: efficient_global_pointer_1 [04/08/2022-14:45:43] [I] Finish parsing network model [04/08/2022-14:45:43] [W] Dynamic dimensions required for input: Input-Segment, but no shapes were provided. Automatically overriding shape to: 1x1 [04/08/2022-14:45:43] [W] Dynamic dimensions required for input: Input-Token, but no shapes were provided. Automatically overriding shape to: 1x1 [04/08/2022-14:45:43] [V] [TRT] Applying generic optimizations to the graph for inference. [04/08/2022-14:45:43] [V] [TRT] Original: 1559 layers [04/08/2022-14:45:43] [V] [TRT] After dead-layer removal: 1559 layers [04/08/2022-14:45:43] [V] [TRT] Running: ConstShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/model_2/Embedding-Norm/mul/ReadVariableOp:0 with (Unnamed Layer* 118) [Shuffle] [04/08/2022-14:45:43] [V] [TRT] Running: ConstShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/model_2/Embedding-Norm/add_1/ReadVariableOp:0 with (Unnamed Layer* 121) [Shuffle] [04/08/2022-14:45:43] [V] [TRT] Running: ConstShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/model_2/Embedding-Mapping/BiasAdd/ReadVariableOp:0 with (Unnamed Layer* 135) [Shuffle] [04/08/2022-14:45:43] [V] [TRT] Running: ConstShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_6/BiasAdd/ReadVariableOp:0 with (Unnamed Layer* 2335) [Shuffle] [04/08/2022-14:45:43] [V] [TRT] Running: ConstShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/model_2/efficient_global_pointer_1/dense_7/BiasAdd/ReadVariableOp:0 with (Unnamed Layer* 2793) [Shuffle] [04/08/2022-14:45:43] [V] [TRT] Running: ConstShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ConstShuffleFusion: Fusing ConstantFolding/StatefulPartitionedCall/model_2/efficient_global_pointer_1/truediv_1_recip:0 with (Unnamed Layer* 2796) [Shuffle] [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/ExpandDims_1 
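
Note: the tail parsed just above — add_5 -> ones_like -> MatrixBandPart (assembled from the Shape/Slice/CumSum/Div/Mod/Greater/Not/Cast chain) -> sub_2 -> mul_8 -> sub_3, marked as the output efficient_global_pointer_1 — is tf2onnx's element-wise lowering of GlobalPointer's lower-triangle masking, and every op in the chain parsed into stock TensorRT layers with no plugin. A minimal NumPy sketch of what that tail computes, assuming sub/x is 1.0 and mul_8/y is a large penalty on the order of 1e12 (the verbose log does not print these constants):

# Hedged sketch of the parsed tail: add_5 -> MatrixBandPart -> sub_2 -> mul_8 -> sub_3.
import numpy as np

def mask_lower_triangle(logits, big=1e12):
    # logits: (batch, heads, seq, seq) scores after the padding masks (add_5).
    seq = logits.shape[-1]
    # MatrixBandPart(ones, 0, -1) keeps the upper triangle, i.e. spans with start <= end.
    band = np.triu(np.ones((seq, seq), dtype=logits.dtype))
    # sub_2 = 1 - band; mul_8 = sub_2 * big; sub_3 = logits - mul_8
    return logits - (1.0 - band) * big

scores = np.random.randn(1, 10, 8, 8).astype(np.float32)  # heads = 10, as in the shapes above
masked = mask_lower_triangle(scores)  # entries with start > end drop to roughly -1e12
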
[04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1 with Transpose__3509 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_1 + Transpose__3509 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 224) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 224) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] 
[TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1 with Transpose__3515 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_1 + Transpose__3515 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 417) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 417) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose with 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1 with Transpose__3524 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_1 + Transpose__3524 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 599) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 599) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/ExpandDims_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1 with Transpose__3527 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_1 + Transpose__3527 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 781) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 781) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1 with Transpose__3533 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_1 + Transpose__3533 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 963) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 963) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1 with 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1 with Transpose__3539 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_1 + Transpose__3539 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 1145) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 1145) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1 with Transpose__3545 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_1 + Transpose__3545 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 1327) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 1327) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2 with 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1 with Transpose__3554 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_1 + Transpose__3554 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 1509) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 1509) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion 
[04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1 with Transpose__3560 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_1 + Transpose__3560 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 1691) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 1691) [Shuffle] 
+ StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1 with Transpose__3566 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_1 + Transpose__3566 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 1873) [Shuffle] with 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 1873) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1 with Transpose__3569 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_1 + Transpose__3569 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 2055) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 2055) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1 with Transpose__3578 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_1 + Transpose__3578 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/transpose with 
StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing (Unnamed Layer* 2237) [Shuffle] with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing (Unnamed Layer* 2237) [Shuffle] + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/Reshape_2 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/transpose_1 + StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Reshape_3 with StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_3/Tensordot/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/transpose with StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_1 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/efficient_global_pointer_1/einsum/Reshape_2 [04/08/2022-14:45:43] [V] [TRT] Running: ConstShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/model_2/efficient_global_pointer_1/mul_8/y:0 with (Unnamed Layer* 2993) [Shuffle] [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_2/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum/MatMul to be part of self-attention pattern. 
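The ShuffleShuffleFusion entries above collapse pairs of adjacent Shuffle layers (the Reshape/Transpose chains tf2onnx emits around each attention block's einsum) into a single Shuffle, and ShuffleErasure drops the ones that become no-ops. A toy NumPy check of the identity the fusion relies on, with invented shapes and permutations that are not taken from this model, is:

import numpy as np

# Two back-to-back transposes compose into one permutation, which is why two
# adjacent Shuffle layers can be replaced by a single fused Shuffle.
# Shapes and permutations below are invented for illustration only.
x = np.random.rand(2, 8, 4, 16).astype(np.float32)   # e.g. (batch, seq, heads, dim)

p1 = (0, 2, 1, 3)   # first shuffle: swap the seq and head axes
p2 = (1, 0, 2, 3)   # second shuffle: move heads in front of the batch axis

# Axis k of transpose(t, p2) reads axis p2[k] of t, which reads axis p1[p2[k]]
# of x, so the fused permutation is p1 composed with p2.
fused = tuple(p1[axis] for axis in p2)

assert np.array_equal(x.transpose(p1).transpose(p2), x.transpose(fused))
print("fused permutation:", fused)   # (2, 0, 1, 3)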
[04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_2/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_1/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_2/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_2/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_2/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_3/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_2/Tensordot/MatMul to be part of self-attention pattern. 
[04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_4/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_2/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_5/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_2/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_6/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_2/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum/MatMul to be part of self-attention pattern. 
[04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_7/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_2/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_8/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_2/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_9/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_2/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_10/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_2/Tensordot/MatMul to be part of self-attention pattern. 
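The pattern matcher above (it finishes with Transformer-MultiHeadSelfAttention_11 just below) tags six nodes per block: the three Tensordot/MatMul projections, the QK einsum MatMul, the Softmax, and the attention-times-V einsum MatMul. That is the standard einsum-style multi-head self-attention layout; the NumPy sketch below illustrates that generic pattern with invented shapes and weights and is not the model's actual code.

import numpy as np

# Generic einsum-style multi-head self-attention, mirroring the six ops that
# the self-attention matcher tags per block. Shapes and weights are invented.
B, N, H, D = 1, 128, 12, 64                      # batch, seq len, heads, head dim
rng = np.random.default_rng(0)
x = rng.standard_normal((B, N, H * D)).astype(np.float32)
wq, wk, wv = (rng.standard_normal((H * D, H * D)).astype(np.float32) for _ in range(3))

q = (x @ wq).reshape(B, N, H, D)                 # projection (Tensordot/MatMul)
k = (x @ wk).reshape(B, N, H, D)                 # projection (Tensordot/MatMul)
v = (x @ wv).reshape(B, N, H, D)                 # projection (Tensordot/MatMul)

scores = np.einsum("bmhd,bnhd->bhmn", q, k) / np.sqrt(D)    # einsum/MatMul
scores -= scores.max(axis=-1, keepdims=True)                # numerically stable Softmax
attn = np.exp(scores)
attn /= attn.sum(axis=-1, keepdims=True)
out = np.einsum("bhmn,bnhd->bmhd", attn, v)                 # einsum_1/MatMul
print(out.shape)                                            # (1, 128, 12, 64)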
[04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense_1/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/dense/Tensordot/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/Softmax to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found StatefulPartitionedCall/model_2/Transformer-MultiHeadSelfAttention_11/einsum_1/MatMul to be part of self-attention pattern. [04/08/2022-14:45:43] [V] [TRT] Found and reassigned Myelin backends for Self-Attention nodes [04/08/2022-14:45:43] [V] [TRT] After Myelin optimization: 82 layers [04/08/2022-14:45:43] [V] [TRT] Applying ScaleNodes fusions. [04/08/2022-14:45:43] [V] [TRT] After scale fusion: 82 layers [04/08/2022-14:45:43] [V] [TRT] Running: SliceErasure [04/08/2022-14:45:43] [V] [TRT] Removing StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/strided_slice_2 [04/08/2022-14:45:43] [V] [TRT] Running: ShuffleShuffleFusion [04/08/2022-14:45:43] [V] [TRT] ShuffleShuffleFusion: Fusing Unsqueeze__3375 with StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims [04/08/2022-14:45:44] [V] [TRT] After vertical fusions: 80 layers [04/08/2022-14:45:44] [V] [TRT] After dupe layer removal: 80 layers [04/08/2022-14:45:44] [V] [TRT] After final dead-layer removal: 80 layers [04/08/2022-14:45:44] [V] [TRT] After tensor merging: 80 layers [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_307 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:31 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:31 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_306 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:30 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:30 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_305 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:29 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:29 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_304 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:28 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:28 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_303 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:27 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:27 to 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_302 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:26 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:26 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_301 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:25 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:25 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_300 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:24 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:24 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_299 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:23 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:23 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_298 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:22 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:22 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_297 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:21 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:21 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_296 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:20 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:20 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_295 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:19 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:19 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_294 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:18 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:18 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_293 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:17 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:17 to 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_292 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:16 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:16 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_291 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:15 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:15 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_290 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:14 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:14 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_289 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:13 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:13 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_288 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:12 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:12 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_287 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:11 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:11 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_286 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:10 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:10 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_285 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:9 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:9 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_284 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:8 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:8 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_283 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:7 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:7 to 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_282 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:6 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:6 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_281 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:5 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:5 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_280 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:4 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:4 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_279 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:3 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:3 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_278 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:2 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:2 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_277 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:1 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:1 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:0 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_264 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:31 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:31 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_263 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:30 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:30 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_262 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:29 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:29 to 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_261 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:28 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:28 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_260 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:27 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:27 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_259 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:26 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:26 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_258 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:25 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:25 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_257 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:24 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:24 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_256 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:23 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:23 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_255 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:22 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:22 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_254 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:21 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:21 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_253 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:20 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:20 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_252 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:19 from 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:19 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_251 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:18 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:18 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_250 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:17 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:17 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_249 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:16 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:16 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_248 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:15 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:15 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_247 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:14 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:14 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_246 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:13 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:13 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_245 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:12 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:12 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_244 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:11 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:11 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_243 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:10 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:10 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_242 by retargeting 
StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:9 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:9 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_241 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:8 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:8 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_240 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:7 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:7 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_239 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:6 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:6 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_238 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:5 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:5 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_237 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:4 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:4 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_236 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:3 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:3 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_235 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:2 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:2 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1_234 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:1 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:1 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] Eliminating slice StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1 by retargeting StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:0 from StatefulPartitionedCall/model_2/efficient_global_pointer_1/split_1:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 [04/08/2022-14:45:44] [V] [TRT] After slice removal: 16 layers [04/08/2022-14:45:44] [V] [TRT] 
Eliminating concatenation StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1 [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. 
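The repeated "Generating copy ... because input is not movable" messages indicate that the inputs of the eliminated concatenation cannot simply be moved (aliased) into the concat's output buffer, so the builder lowers the concat to one explicit copy per input. A toy NumPy rendering of that lowering, with invented shapes and a last-axis concat chosen only for illustration, is:

import numpy as np

# A concat lowered to explicit copies: each input is written into its slot of
# a preallocated output buffer. Shapes and the concat axis are invented.
inputs = [np.full((1, 128, 2), float(i), dtype=np.float32) for i in range(4)]
width = inputs[0].shape[-1]
out = np.empty((1, 128, width * len(inputs)), dtype=np.float32)
for i, t in enumerate(inputs):
    out[..., i * width:(i + 1) * width] = t      # the generated "copy" per input
assert np.array_equal(out, np.concatenate(inputs, axis=-1))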
[04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. 
[04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. [04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable. 
[04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat_1:0 because input is not movable.
[... the message above is repeated verbatim many more times, one entry per generated copy; the identical repeats are elided here ...]
[04/08/2022-14:45:44] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat
[04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/concat:0 because input is not movable.
[... the message above is repeated verbatim many more times, one entry per generated copy; the identical repeats are elided here ...]
[04/08/2022-14:45:44] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387
[04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 because copy elision not implemented for axis.
[04/08/2022-14:45:44] [V] [TRT] Generating copy for StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386:0 to StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 because copy elision not implemented for axis.
[04/08/2022-14:45:44] [V] [TRT] After concat removal: 143 layers
[04/08/2022-14:45:44] [V] [TRT] Graph construction and optimization completed in 1.26118 seconds.
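Up to this point the verbose output covers ONNX parsing and the builder's graph-optimization pass (concatenation elimination plus the copies it generates); everything after it is the tactic-selection phase, where each candidate reformat/kernel is timed. For reference, below is a minimal sketch of driving the same kind of build from the TensorRT 8.x Python API instead of trtexec. It is an illustrative sketch only, assuming the standard tensorrt Python bindings are available; ONNX_PATH, ENGINE_PATH and CACHE_PATH are placeholder names, not paths taken from this log. The optional timing cache corresponds to trtexec's --timingCacheFile option and lets later builds reuse tactic timings instead of re-measuring them as the rest of this log does.

import tensorrt as trt

# Illustrative placeholders, not paths from the log above.
ONNX_PATH = "model.onnx"
ENGINE_PATH = "model.trt"
CACHE_PATH = "timing.cache"

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
# Explicit-batch network, matching the "Max batch: explicit batch" build option.
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
# 6000 MiB workspace pool, the programmatic equivalent of --memPoolSize/--workspace.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 6000 * (1 << 20))

# Optional: persist tactic timings so later builds can skip most of the autotuning below.
try:
    with open(CACHE_PATH, "rb") as f:
        cache = config.create_timing_cache(f.read())
except FileNotFoundError:
    cache = config.create_timing_cache(b"")
config.set_timing_cache(cache, ignore_mismatch=False)

# This call runs the optimizer and the tactic timing that the remainder of the log records.
engine_bytes = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(engine_bytes)
with open(CACHE_PATH, "wb") as f:
    f.write(cache.serialize())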
[04/08/2022-14:45:45] [V] [TRT] Using cublasLt as a tactic source [04/08/2022-14:45:45] [W] [TRT] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.4.2 [04/08/2022-14:45:45] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +649, GPU +272, now: CPU 1458, GPU 851 (MiB) [04/08/2022-14:45:45] [V] [TRT] Using cuDNN as a tactic source [04/08/2022-14:45:45] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +178, GPU +268, now: CPU 1636, GPU 1119 (MiB) [04/08/2022-14:45:45] [W] [TRT] TensorRT was linked against cuDNN 8.3.2 but loaded cuDNN 8.2.1 [04/08/2022-14:45:45] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored. [04/08/2022-14:45:45] [V] [TRT] Constructing optimization profile number 0 [1/1]. [04/08/2022-14:45:45] [V] [TRT] Reserving memory for activation tensors. Host: 0 bytes Device: 48 bytes [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> (Unnamed Layer* 2582) [Constant]_output) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.007424 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.00576 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.00576 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> (Unnamed Layer* 2582) [Constant]_output) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.009984 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.005888 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.005888 [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1,1) -> Float(1:4,1,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.009472 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.00576 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.00576 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1,1) -> Float(1:32,1,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.009728 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.005888 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.005888 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1,1) -> Float(1,1,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.006912 [04/08/2022-14:45:45] 
[V] [TRT] Tactic: 0 Time: 0.005888 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.005888 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1,1) -> Float(1:32,1,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.009344 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.005632 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.005632 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:32,1,1) -> Float(1,1,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.006656 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.005632 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.005632 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:32,1,1) -> Float(1:4,1,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.009472 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.005888 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.005888 [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat((Unnamed Layer* 2582) [Constant]_output -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.007168 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.005888 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.005888 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat((Unnamed Layer* 2582) [Constant]_output -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat((Unnamed Layer* 2582) [Constant]_output -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.010624 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat((Unnamed Layer* 2582) [Constant]_output -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.01024 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> 
Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat((Unnamed Layer* 2582) [Constant]_output -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat((Unnamed Layer* 2582) [Constant]_output -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.01024 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006016 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006016 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.010496 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.010496 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul:0) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.010112 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] 
*************** Autotuning Reformat: Float(32,32,1) -> Float(1,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.009984 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006016 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006016 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.009984 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006016 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006016 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(1,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.01024 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(1,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos:0 -> ) (Reformat) [04/08/2022-14:45:45] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:45] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:45] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:45] [V] 
[TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(1,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(1,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,1) -> Float(32:32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(32,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(1,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32:32,32,1) -> Float(1:4,32,1) *************** [04/08/2022-14:45:45] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:45] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:45] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009856 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009984 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.007552 
[04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.007552 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009984 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009728 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009856 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009856 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1:4,1,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.007552 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 
[04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1:4,1,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.00768 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1:4,1,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009728 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.0064 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.0064 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1:4,1,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.0096 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32:32,1,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.007552 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32:32,1,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.007552 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32:32,1,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009856 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 
Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32:32,1,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3384:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009856 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1:4,1,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1:4,1,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1:4,1,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,1:4,1,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32:32,1,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32:32,1,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32:32,1,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32:32,1,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,64,2,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009856 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,64,2,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.007424 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,64,2,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,1,2,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009728 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.005888 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.005888 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,1,2,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010112 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,1,2,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.01024 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,1:4,2,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,1:4,2,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.01024 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006016 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,1:4,2,1) -> Float(64,64:32,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] 
--------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010112 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,64:32,2,1) -> Float(64,64,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,64:32,2,1) -> Float(64,1,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,64:32,2,1) -> Float(64,1:4,2,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Concat__3387:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010112 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,64,1) -> Float(1,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009856 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,64,1) -> Float(1:4,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.00768 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64,64,1) -> Float(64:32,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010624 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: 
Float(1,64,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009856 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.00576 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.00576 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(1,64,1) -> Float(1:4,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010112 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(1,64,1) -> Float(64:32,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010496 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(1:4,64,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010496 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(1:4,64,1) -> Float(1,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.01024 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(1:4,64,1) -> Float(64:32,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.01024 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006272 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64:32,64,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64:32,64,1) -> Float(1,64,1) *************** 
[04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.01024 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64:32,64,1) -> Float(1:4,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.01024 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.0064 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.0064 [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(1,64,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009472 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.005888 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.005888 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(1:4,64,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010496 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64:32,64,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape:0 -> ) (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.010368 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/split:0 copy (Reformat) [04/08/2022-14:45:46] [V] [TRT] Tactic: 1002 Time: 0.009472 [04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.006144 [04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] 
=============== Computing reformatting costs
[04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) ***************
[... the pair of messages above ("Computing reformatting costs" followed by "Autotuning Reformat: Float(32,32,1) -> Float(64,64,1)") is repeated verbatim many more times; the identical repeats are elided here ...]
[04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs
[04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat:
Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(1,64,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(1:4,64,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(64:32,64,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing 
reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] 
[TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: 
Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) *************** [04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs [04/08/2022-14:45:46] [V] [TRT] *************** Autotuning Reformat: Float(32,32,1) -> Float(64,64,1) 
[04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs
[04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs
[04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs
[04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs
[04/08/2022-14:45:46] [V] [TRT] =============== Computing reformatting costs
[04/08/2022-14:45:46] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:46] [V] [TRT] *************** Autotuning format combination: -> Int32(1) ***************
[04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/arange (Fill)
[04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.005376
[04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.005376
[04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Fill Tactic: 0
[04/08/2022-14:45:46] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:46] [V] [TRT] *************** Autotuning format combination: -> Float(32,32,1) ***************
[04/08/2022-14:45:46] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:46] [V] [TRT] *************** Autotuning format combination: Int32(1) -> Float(1) ***************
[04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast (Cast)
[04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.005504
[04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.005504
[04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cast (Reformat)
[04/08/2022-14:45:46] [V] [TRT] Reformat has no valid tactics for this config, skipping
[04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Cast Tactic: 0
[04/08/2022-14:45:46] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:46] [V] [TRT] *************** Autotuning format combination: Float(1) -> Float(1,1,1) ***************
[04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: Unsqueeze__3375 + StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/ExpandDims (Shuffle)
[04/08/2022-14:45:46] [V] [TRT] Tactic: 0 Time: 0.005376
[04/08/2022-14:45:46] [V] [TRT] Tactic: 1 Time: 0.009088
[04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 0 Time: 0.005376
[04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0
[04/08/2022-14:45:46] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:46] [V] [TRT] *************** Autotuning format combination: Float(1,1,1), Float(32,32,1) -> Float(32,32,1) ***************
[04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul (ElementWise)
[04/08/2022-14:45:46] [V] [TRT] Tactic: 1 Time: 0.005504
[04/08/2022-14:45:46] [V] [TRT] Fastest Tactic: 1 Time: 0.005504
[04/08/2022-14:45:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: ElementWise Tactic: 1
[04/08/2022-14:45:46] [V] [TRT] *************** Autotuning format combination: Float(1:4,1,1), Float(1:4,32,1) -> Float(1:4,32,1) ***************
[04/08/2022-14:45:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul (ElementWise)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 1 Time: 0.006144
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 1 Time: 0.006144
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: ElementWise Tactic: 1
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(1:32,1,1), Float(32:32,32,1) -> Float(32:32,32,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/einsum/Mul (ElementWise)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 1 Time: 0.006144
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 1 Time: 0.006144
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: ElementWise Tactic: 1
[04/08/2022-14:45:47] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(32,32,1) -> Float(32,32,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Cos (Unary)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.005376
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.005376
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Unary Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(32,32,1) -> Float(32,32,1,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386 (Shuffle)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.005248
[04/08/2022-14:45:47] [V] [TRT] Tactic: 1 Time: 0.012544
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.005248
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(1,32,1) -> Float(32,1,1,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386 (Shuffle)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.00576
[04/08/2022-14:45:47] [V] [TRT] Tactic: 1 Time: 0.015872
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.00576
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(1:4,32,1) -> Float(32,1:4,1,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386 (Shuffle)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.006144
[04/08/2022-14:45:47] [V] [TRT] Tactic: 1 Time: 0.014848
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.006144
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(32:32,32,1) -> Float(32,32:32,1,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/stack_Unsqueeze__3386 (Shuffle)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.006272
[04/08/2022-14:45:47] [V] [TRT] Tactic: 1 Time: 0.016384
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.006272
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(32,32,1) -> Float(32,32,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Sin (Unary)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.00576
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.00576
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Unary Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(32,32,1) -> Float(32,32,1,1) ***************
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(1,32,1) -> Float(32,1,1,1) ***************
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(1:4,32,1) -> Float(32,1:4,1,1) ***************
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(32:32,32,1) -> Float(32,32:32,1,1) ***************
[04/08/2022-14:45:47] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(64,64,2,1) -> Float(64,64,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape (Shuffle)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.005248
[04/08/2022-14:45:47] [V] [TRT] Tactic: 1 Time: 0.009216
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.005248
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(64,1,2,1) -> Float(1,64,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape (Shuffle)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.006272
[04/08/2022-14:45:47] [V] [TRT] Tactic: 1 Time: 0.015872
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.006272
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(64,1:4,2,1) -> Float(1:4,64,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape (Shuffle)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.006144
[04/08/2022-14:45:47] [V] [TRT] Tactic: 1 Time: 0.010496
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.006144
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(64,64:32,2,1) -> Float(64:32,64,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/sinusoidal_position_embedding/Reshape (Shuffle)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.0064
[04/08/2022-14:45:47] [V] [TRT] Tactic: 1 Time: 0.016128
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.0064
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(64,64,1) -> Float(32,32,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2 (Slice)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.005632
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.005632
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_2 (Padding)
[04/08/2022-14:45:47] [V] [TRT] Padding has no valid tactics for this config, skipping
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Slice Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(64,64,1) -> Float(32,32,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3 (Slice)
[04/08/2022-14:45:47] [V] [TRT] Tactic: 0 Time: 0.005888
[04/08/2022-14:45:47] [V] [TRT] Fastest Tactic: 0 Time: 0.005888
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/model_2/efficient_global_pointer_1/strided_slice_3 (Padding)
[04/08/2022-14:45:47] [V] [TRT] Padding has no valid tactics for this config, skipping
[04/08/2022-14:45:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Slice Tactic: 0
[04/08/2022-14:45:47] [V] [TRT] =============== Computing costs for
[04/08/2022-14:45:47] [V] [TRT] *************** Autotuning format combination: Float(1,1), Float(1,1), Float(64,64,1), Float(64,64,1) -> Float(10,1,1,1) ***************
[04/08/2022-14:45:47] [V] [TRT] --------------- Timing Runner: {ForeignNode[(Unnamed Layer* 2964) [Gather]_output[Constant]...StatefulPartitionedCall/model_2/efficient_global_pointer_1/sub_3]} (Myelin)
free(): double free detected in tcache 2
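The build aborts with a glibc double free as soon as the Myelin timing runner for the ForeignNode starts, so no engine is written. As a cross-check outside trtexec, a minimal Python sketch like the one below rebuilds the same engine through the TensorRT Python API; the script is not part of the original log, and the verbose logger, EXPLICIT_BATCH flag, and 6000 MiB workspace pool limit are assumptions chosen to mirror the command line used here.

# Hypothetical repro sketch (assumes the TensorRT 8.4 Python bindings are installed).
import tensorrt as trt

ONNX_PATH = "/home/omnisky/yanxf/GlobalPointer-main/tf15_models/output5.onnx"
ENGINE_PATH = "/home/omnisky/yanxf/GlobalPointer-main/tf15_models/output5.trt"

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the same ONNX file that trtexec was given.
with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
# Same 6000 MiB workspace limit that --workspace=6000 requested.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 6000 * (1 << 20))

# This is the step that corresponds to the tactic timing above; if the double
# free is in the builder/Myelin itself rather than in trtexec, it should also
# crash here.
serialized = builder.build_serialized_network(network, config)
if serialized is None:
    raise SystemExit("Engine build failed")
with open(ENGINE_PATH, "wb") as f:
    f.write(serialized)
print("Engine written to", ENGINE_PATH)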